# Where is the difference here?

Since my Cornell conference contribution has generated dozens of critical comments on another thread, I feel compelled to respond. I hope this is the last time I ever have to talk about this topic; I’m really tired of it.

Here are two scenarios:

1. A tornado hits a town, turning houses and cars into rubble. Then, another tornado hits, and turns the rubble back into houses and cars.

2. The atoms on a barren planet spontaneously rearrange themselves, with the help of solar energy and under the direction of four unintelligent forces of physics alone, into humans, cars, high-speed computers, libraries full of science texts and encyclopedias, TV sets, airplanes and spaceships. Then, the sun explodes into a supernova, and, with the help of solar energy, all of these things turn back into dust.

It is almost universally agreed in the scientific community that the second stage (but not the first) of scenario 1 would violate the second law of thermodynamics, at least the more general statements of this law (e.g., “In an isolated system, the direction of spontaneous change is from order to disorder”; see footnote 4 in my paper). It is also almost universally agreed that the first stage of scenario 2 does not violate the second law. (Of course, everyone agrees that there is no conflict in the second stage.) So what is the difference here?

Every general physics book which discusses evolution and the second law argues that the first stage of scenario 2 does not violate the second law because the Earth is an open system, and entropy can decrease in an open system as long as the decrease is compensated by increases outside the Earth. I gave several examples of this argument in section 1; if you can find a single general physics text anywhere which makes a different argument in claiming that evolution does not violate the second law, let me know which one.

Well, this same compensation argument can equally well be used to argue that the second tornado in scenario 1 does not violate the second law: the Earth is an open system, tornados receive their energy from the sun, any decrease in entropy due to a tornado that turns rubble into houses and cars is easily compensated by increases outside the Earth. It is difficult to define or measure entropy in scenario 2, but it is equally difficult in scenario 1.

I’ll save you the trouble: there is only one reason why nearly everyone agrees that the second law is violated in scenario 1 and not scenario 2: because there is a widely believed theory as to how the evolution of life and of human intelligence happened, while there is no widely believed theory as to how a tornado could turn rubble into houses and cars. There is no other argument which can be made as to why the second law is not violated in scenario 2, that could not equally well be applied to argue that it is not violated in scenario 1 either.

Well, in this paper, and every other piece I have written on this topic, including my new Bio-Complexity paper, and the video below, I have acknowledged that, if you really can explain scenario 2, then it does not violate the basic principle behind the second law. In my conclusions in the Cornell contribution, I wrote:

Of course, one can still argue that the spectacular increase in order seen on Earth is consistent with the underlying principle behind the second law, because what has happened here is not really extremely improbable. One can still argue that once upon a time…a collection of atoms formed by pure chance that was able to duplicate itself, and these complex collections of atoms were able to pass their complex structures on to their descendants generation after generation, even correcting errors. One can still argue that, after a long time, the accumulation of genetic accidents resulted in greater and greater information content in the DNA of these more and more complex collections of atoms, and eventually something called “intelligence” allowed some of these collections of atoms to design cars and trucks and spaceships and nuclear power plants. One can still argue that it only seems extremely improbable, but really isn’t, that under the right conditions, the influx of stellar energy into a planet could cause atoms to rearrange themselves into computers and laser printers and the Internet.

Of course, if you can come up with a nice theory on how tornados could turn rubble into houses and cars, you can argue that the second law is not violated in scenario 1 either.

Elizabeth and KeithS, you are welcome to go back to your complaints about what an idiot Sewell is to think that dust spontaneously turning into computers and the Internet might violate “the basic principle behind the second law,” and about how this bad paper shows that all of the Cornell contributions were bad. But please first give me another reason, other than the one I acknowledged, why there is a conflict with the second law (or at least the fundamental principle behind the second law) in scenario 1 and not in scenario 2. (Or perhaps you now see no conflict with the second law in scenario 1 either; that is an acceptable answer, but then you are in conflict with the scientific consensus!)

And if you can’t think of another reason, what in my paper do you disagree with? It seems we are in complete agreement!

## 387 Replies to “Where is the difference here?”

1. 1

Granville:

It is almost universally agreed in the scientific community that the second stage (but not the first) of scenario 1 would violate the second law of thermodynamics

No, it is not “almost universally agreed in the scientific community that the second stage (but not the first) of scenario 1 would violate the second law of thermodynamics.”

You have confused “order” as in low entropy with “order” as in “not chaos”.

You seem to think that a tidy house, or a computer, has less entropy than a messy house, or a computer after it has been sat on by an elephant.

It doesn’t.

And if you don’t think that, then why think that the appearance of “humans, cars, high-speed computers, libraries full of science texts and encyclopedias, TV sets, airplanes and spaceships” represents a reduction in entropy?

Do you?

There might be good arguments that such things require a designer, but the 2nd law of thermodynamics is not one of them, because their existence does not violate it.

2. 2
Collin says:

Granville,

May I suggest an addition to your theory (forgive me if you’ve already said this before): “It is evident that entropy is not leaving the earth fast enough to compensate for the order developing on the earth.” This assertion would require some way of measuring entropy and order, applied over the billions of years of earth’s existence. That would be difficult, but I wonder if a good estimate is possible.

3. 3

Collin:

“It is evident that entropy is not leaving the earth fast enough to compensate for the order developing on the earth.”

But it isn’t evident. Granville has confused two meanings of the word “order” and claims that an “ordered” thing, like a pre-tornado house, has less entropy than a “disordered” thing like a post-tornado house.

It doesn’t. The post-tornado house could conceivably have less entropy, if the tornado had deposited lots of its stuff up a tree.

Sure it would be more “disordered” in the common or garden sense. But that isn’t the sense in which low entropy is order. If a thing has low entropy, that just means its bits are arranged in a way that they can do work. If the sofa ended up at the top of a tree, it could do work by making a hole in the ground when it fell back down. Before the tornado, it couldn’t.

4. 4
Granville Sewell says:

Elizabeth,

Well, perhaps “almost universally agreed” is a bit of an exaggeration; in the Bio-Complexity paper I acknowledge that “Although most general physics textbooks give examples of entropy increases that are difficult to quantify, such as wine glasses breaking or books burning, because it is more difficult to define an associated entropy precisely in scenario C, some scientists are reluctant to apply the second law to things like tornados.”

Certainly the first formulations of the second law, which were all about heat and energy, are not threatened by evolution. But nearly all general physics texts (thermodynamics texts, not so much, since they prefer quantifiable applications) do give examples of “entropy” increases (in the more general sense) which have nothing to do with heat or energy, such as tornados, rust, fire, glasses breaking, cars colliding, etc. Isaac Asimov, in the Smithsonian magazine, even talked about the entropy increase associated with a house becoming more messy (see my footnote 6 in my Cornell contribution).

So if I am confused in applying the more general formulations of the second law to things like tornados, I am at least in good company, as nearly all general physics textbooks do this, and I think it is quite unfair to say, as KeithS does, that I would be laughed out of any physics meeting.

5. 5

Granville:

Well, perhaps “almost universally agreed” is a bit of an exaggeration; in the Bio-Complexity paper I acknowledge that “Although most general physics textbooks give examples of entropy increases that are difficult to quantify, such as wine glasses breaking or books burning, because it is more difficult to define an associated entropy precisely in scenario C, some scientists are reluctant to apply the second law to things like tornados.”

It’s an exaggeration to the point of falsity, Granville. A computer simply does not have less entropy than the bits of the computer had before assembly (or after disassembly). Some artefacts do (a fridge, cement, a bridge, dynamite), but many don’t, and those that do don’t violate the second law, because the local entropy decrease is gained at the cost of entropy increase in the fuel used to run or build the artefact.

Of course it is true that a tornado is vastly less likely to build a nice duplex than a good builder is, but that isn’t because the builder violates the second law of thermodynamics. The 2nd Law of thermodynamics says nothing about “order” in the sense in which you are using the term.

Certainly the first formulations of the second law, which were all about heat and energy, are not threatened by evolution. But nearly all general physics texts (thermodynamics texts, not so much, since they prefer quantifiable applications) do give examples of “entropy” increases (in the more general sense) which have nothing to do with heat or energy, such as tornados, rust, fire, glasses breaking, cars colliding, etc. Isaac Asimov, in the Smithsonian magazine, even talked about the entropy increase associated with a house becoming more messy (see my footnote 6 in my Cornell contribution).

It’s perfectly true that any artefact, left to itself, unmaintained, will tend to increase in entropy – timbers will decay and fall, things will be taken down from high shelves and not put back, dust will grow thick on the floor, etc. All these effects do indeed represent an increase in entropy – a rearrangement of matter less able than formerly to do work. What is not true is that the same will result from a tornado, which may well lift books into trees and roofs up mountains, where they can do more work, not less. And Asimov also rightly says that a human brain has less entropy than the molecules of which it is composed – so has a tree. You can prove this by using either as fuel for a barbecue. But again, no violation of the 2nd Law has occurred, because brains grow by virtue of the food we eat (or our mothers eat) and trees grow by virtue of increasing entropy in the sun.

However, Asimov is in danger of making the same mistake as you are making, as is your other citee, Peter Urone. They do not actually make the same mistake, as it is true that human endeavours do sometimes result in local decreases of entropy, courtesy of increases in entropy used to achieve them, but they come perilously close to equating “looks highly ordered and not chaotic” with “has low entropy”.

So if I am confused in applying the more general formulations of the second law to things like tornados, I am at least in good company, as nearly all general physics textbooks do this, and I think it is quite unfair to say, as KeithS does, that I would be laughed out of any physics meeting.

In your defense, it is true that basic physics textbooks are often very carelessly written in an attempt to create an accessible mental picture for the students, and the popular metaphor of “entropy” as “disorder” is highly misleading. There has been considerable concern about this in the pedagogical literature, for instance:
Disorder – A Cracked Crutch for Supporting Entropy Discussions

To aid students in visualizing an increase in entropy, many elementary chemistry texts use artists’ before-and-after drawings of groups of “orderly” molecules that become “disorderly”. This seems to be a useful visual support, but it can be so misleading as actually to be a failure-prone crutch. Ten examples illustrate the problem. Entropy is not disorder, not a measure of chaos, not a driving force. Energy’s diffusion or dispersal to more microstates is the driving force in chemistry. Entropy is the measure or index of that dispersal. In thermodynamics, the entropy of a substance increases when it is warmed because more thermal energy has been dispersed within it from the warmer surroundings. In contrast, when ideal gases or liquids are allowed to expand or to mix in a larger volume, the entropy increase is due to a greater dispersion of their original unchanged thermal energy. From a molecular viewpoint all such entropy increases involve the dispersal of energy over a greater number, or a more readily accessible set, of microstates. Frequently misleading, order-disorder as a description of entropy change is also an anachronism. It should be replaced by describing entropy change as energy dispersal–from a molecular viewpoint, by changes in molecular motions and occupancy of microstates.

Undergraduate students’ understandings of entropy and Gibbs free energy

Only a small minority of students showed a ‘sound understanding’ of the concept of entropy and Gibbs free energy. This weak understanding was mirrored in Pinto’s [14] study of Catalan undergraduate physics students, who did not connect different aspects of the Second Law with one another, and few students used entropy to explain everyday processes. Sozbilir [13] found that students attempted to explain entropy as ‘disorder’. However, ‘almost all of the respondents defined entropy from the visual disorder point of view, indicating chaos, randomness or instability in some cases’. He found that the term ‘disorder’ was used to refer to movement, collision of particles and the extent to which things were ‘mixed up’. Students did not have a clear understanding of enthalpy and energy of a system and seemed to confuse the kinetic energy of a system and entropy. Some also confused enthalpy with Gibbs free energy. Similar findings were reported by Ribiero et al. [15] and Selepe and Bradley [17]. No student used microstates to explain disorder (Sozbilir [13]).

6. 6
Granville Sewell says:

Elizabeth,

The first formulations of the second law were only about heat and energy; the later ones are more general. The second law applies to the diffusion of anything that diffuses, not just heat. That is acknowledged by everyone: the same equations apply, and the same definition of entropy applies (except that you replace temperature by the concentration of the diffusing element). So to claim that the second law is only about energy is absolutely false; it applies in quantitative form to many other things that have nothing to do with heat or energy. See footnote 1 in my Bio-Complexity paper.
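In standard continuum form, the substitution described above can be sketched as follows (a minimal sketch, not notation from the paper: $k$ is a thermal conductivity, $D$ a diffusivity, and $V$ an isolated region with insulated, no-flux boundaries):

```latex
% Entropy production for heat conduction (Fourier flux Q = -k \nabla T):
\frac{dS_{\text{thermal}}}{dt} \;=\; \int_V \frac{k\,|\nabla T|^{2}}{T^{2}}\,dV \;\ge\; 0

% Replacing temperature T by the concentration C of the diffusing
% substance (Fick flux J = -D \nabla C) gives the analogous quantity:
\frac{dS_X}{dt} \;=\; \int_V \frac{D\,|\nabla C|^{2}}{C^{2}}\,dV \;\ge\; 0
```

Both integrands are non-negative, which is the sense in which “the same equations apply” once temperature is swapped for concentration.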

And most people, but not everyone, further generalize it to things like books burning, tornados destroying houses, etc.; whether such generalizations are valid depends on which of several statements of the second law you use.

7. 7
Andre says:

I must admit, I have great difficulty understanding what Dr Liddle is trying to say. The earth is losing more than it is receiving. I think the earth is currently losing about 50 000 tonnes of mass per year and only gaining 40 000 tonnes. If we are losing more than we get, can anyone explain to me how it is possible for atoms to become humans, cars and computers?

The question is… are we really an open system, as the consensus would like us to believe? If we have a net loss, how on earth are humans even possible?

8. 8

Elizabeth,

The first formulations of the second law were only about heat and energy; the later ones are more general. The second law applies to the diffusion of anything that diffuses, not just heat. That is acknowledged by everyone: the same equations apply, and the same definition of entropy applies (except that you replace temperature by the concentration of the diffusing element). So to claim that the second law is only about energy is absolutely false; it applies in quantitative form to many other things that have nothing to do with heat or energy. See footnote 1 in my Bio-Complexity paper.

In your paper, you equate “entropy” with “randomness”, and argue that randomness can apply to macroscopic objects as well as atoms and molecules. Unfortunately you do not define “randomness”. However, you do imply that by “more random” you mean “more uniformly diffused”, which is fine. In that sense, higher entropy means more uniformity. However, a tornado does not necessarily result in a more uniformly diffused distribution of contents. It could well result in a greater concentration of them (messy though that concentration would be). And if we define “low entropy” as a configuration more able to do work, then if some of the stuff is up a tree, it can do more work than it could when it was neatly arranged on the floor of the house.

Similarly, when we build a computer, we do not necessarily end up with a more, or less, uniform distribution of contents than we started with. We may have gathered together the requisite parts from many regions, but we have used fuel to do so. So again, no 2nd law violation has occurred.

And most people, but not everyone, further generalize it to things like books burning, tornados destroying houses, etc.; whether such generalizations are valid depends on which of several statements of the second law you use.

Well, a burnt book clearly has more thermodynamic entropy than an unburnt book. But a book does not necessarily have less thermodynamic entropy than the pieces of paper and ink that it consisted of before.

And I’m really tired of being talked down to, as though I know nothing of the topic and you need to bring me up to speed. I wish someone would actually read my papers before commenting on them.

I have read your MI paper, your BIOcomplexity paper, and your New Perspectives paper, each several times, in great detail. I apologise if I seem to “talk down” to you as though you “know nothing of the topic”, but as far I can tell you have made a major error of understanding. It is not “talking down” to someone to point out that they have made an error.

Yes, you can apply the 2nd Law to macroscopically diffusing things like an unmaintained house, in which, over time, bits of it end up on the floor, including the roof and the contents, where they have less potential energy, and thus less capacity to do work. But you can’t then claim that tidying up that house violates the 2nd law, because tidying up that house can only be done by something that is fueled (e.g. me), and that fuel will increase in entropy as I burn it up during my tidying.

A system that has high entropy is in a configuration that cannot easily do work. A house is not “low entropy” when it is tidy and “high entropy” when it is messy. If my son puts a booby-trap on his bedroom door, so that all his soft toys (or worse) fall on my head when I come in, he might have messed up his room to do it, but the room has lower entropy when his toys are on top of the door than when they were on the shelves.

You are clearly a fine mathematician, Granville, but you seem to have learned your physics from out-of-date and elementary textbooks. That’s why your paper is wrong. The 2nd Law of thermodynamics is not violated when someone makes a computer or tidies up a house, because building a computer or house does not usually result in significantly lower-entropy systems than their ingredients (and may result in higher), and even when it does (building a fridge does create a lower-entropy system), the 2nd Law would still not have been violated, because these things require fuel to run and build, and using fuel increases entropy.

And indeed a tornado itself is an excellent example of local decrease in entropy. Do you think that tornadoes violate the 2nd Law of thermodynamics?

Thermodynamic entropy is not the same thing as chaos. Quite the opposite. Chaotic systems (like tornadoes) represent a decrease in local entropy, not an increase.

9. 9

Andre:

I must admit, I have great difficulty understanding what Dr Liddle is trying to say.

Well, the 2nd Law of Thermodynamics isn’t as easy as it looks!

But essentially, what it says is that within a closed system the amount of work that can be done is fixed, and as a result, after that work is done, there is less that can still be done.

We can think of work as “heat”, so that if we raise the temperature of one part of the system, we inevitably cool it by a slightly greater amount in some other part, and so the average temperature of the system is always dropping. So a system that starts hot can do more work before it reaches maximum entropy than a system that starts cold, but neither can do anything to increase the total amount of work that can be done.

Or we could think of work as, say, altering things. A system in which all the objects are on a high shelf can alter things by falling off and making a dent in the floor. But once all the objects are on the floor, no more work can be done.

However, in an open system, the amount of work that can be done by the system can increase – we could heat the system up again, or put all the objects back on a high shelf. But to “fund” that work, we’d need to get it from some other system, and as a result, that other system would have a reduced amount of work it could still do.

On earth, we are wide open to the sun, and so it’s probably easier to think of the earth-sun as one system. We can certainly make things hotter on earth, by putting them in the sun. But that hasn’t violated the 2nd Law, because the reason the sun can do this is that it is doing work and, as a result, cooling down.

The sense in which entropy is increased when work is done is simply that the system adopts a more uniform configuration. It might look more “ordered”, or it might look less “ordered”. But what it will be, is in some sense, more uniform. Ultimately, the system will approach cold uniformity – no hot spots, no lumps.

Designing things sometimes makes the parts of the system hotter, or lumpier, but often not, and even when it does, it is at the cost of cooling and increase in uniformity elsewhere. Tornadoes can also make things hotter or lumpier, also endothermic chemical reactions – the capacity to make something hotter or lumpier is not unique to designers or designed things, and again, the 2nd Law is not violated when it happens.
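The “funding” of local entropy decreases described above can be made concrete with a one-function entropy calculation (a minimal sketch; the heat quantity and temperatures are illustrative round numbers, not values from the thread):

```python
# Entropy bookkeeping for a small amount of heat Q flowing from a hot
# reservoir to a cold one -- the simplest "compensation" calculation.
# All numbers are illustrative.

def entropy_change(Q, T_hot, T_cold):
    """Return (dS_hot, dS_cold, dS_total) for heat Q (joules) flowing
    from a reservoir at T_hot (kelvin) to one at T_cold (kelvin)."""
    dS_hot = -Q / T_hot    # the hot side loses entropy (a local decrease)
    dS_cold = Q / T_cold   # the cold side gains more than the hot side lost
    return dS_hot, dS_cold, dS_hot + dS_cold

dS_hot, dS_cold, dS_total = entropy_change(1000.0, 500.0, 250.0)
print(dS_hot, dS_cold, dS_total)   # -2.0 4.0 2.0
```

Because T_cold < T_hot, the total is always positive: the local decrease on one side is more than compensated on the other, with no violation of the 2nd Law.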

The earth is losing more than it is receiving. I think the earth is currently losing about 50 000 tonnes of mass per year and only gaining 40 000 tonnes. If we are losing more than we get, can anyone explain to me how it is possible for atoms to become humans, cars and computers?

I don’t see how that would stop us. We aren’t running short of material yet!

The question is… are we really an open system, as the consensus would like us to believe? If we have a net loss, how on earth are humans even possible?

We are “open” in the sense that we are warmed by the sun. The solar system itself is fairly closed, although it does receive energy from other stars, and material from outside the system.

10. 10

Prof. Sewell,

You debate evolutionists, trying to explain the obvious: anything material tends spontaneously towards disorganization. In fact, this is an interpretation of the 2nd law: systems spontaneously go towards the more probable states, and the more probable states are the more numerous ones, which are the disorganized ones.

In very simple terms this is the question.

Evolutionists disagree because they believe that biological systems spontaneously organize themselves. That is exactly the inverse of what the 2nd law states.
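The claim that the more probable macrostates are the more numerous ones can be illustrated by counting arrangements (a toy sketch using coin flips as two-state particles; the size N is an arbitrary illustrative choice):

```python
from math import comb

# Count microstates per macrostate for N two-state "particles":
# the perfectly "ordered" macrostate has exactly one arrangement,
# while the maximally mixed one has astronomically many.
N = 100
all_heads = comb(N, N)           # one way to have every coin heads-up
half_and_half = comb(N, N // 2)  # ways to have 50 heads and 50 tails

print(all_heads)               # 1
print(half_and_half > 10**29)  # True: over 10^29 "mixed" arrangements
```

If every microstate is equally likely, the mixed macrostate is overwhelmingly more probable simply because it is vastly more numerous, which is the probabilistic reading of the 2nd law invoked above.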

11. 11

Granville,

First of all, thank you for posting this and allowing comments. I think this is an interesting topic that merits discussion, but many of your posts in the past have been closed to comments.

Second, I realize you are tired of talking about it, but I hope you will be patient with us and will have some time to stick with this thread for a bit as the discussion unfolds.

Thank you,

12. 12
Andre says:

Dr Liddle

Think about this: entropy is increasing. If entropy is constantly increasing due to mass loss, how are humans, cars and computers possible? The energy we receive from the sun does not negate the loss of matter. There is no equilibrium, as you would like to believe.

13. 13
Granville Sewell says:

Elizabeth,

Please note my footnote 1 in the Bio-Complexity paper:

There are many thermodynamic entropies, corresponding to different degrees of experimental discrimination and different choices of parameters. For example, there will be an increase of entropy by mixing samples of O-16 and O-18 only if isotopes are experimentally distinguished. (R. Carnap, Two Essays on Entropy, Univ. of California Press 1977.)

Do you claim that the entropy which increases when two oxygen isotopes mix, mentioned by Carnap, is really just thermal entropy, which you seem to think is the only legitimate entropy? Or is he just confused too?

What about the “chromium entropy” in an isolated solid, which increases when chromium becomes more uniformly distributed (scenario B in the Bio-Complexity paper)? That is governed by exactly the same equations as thermal entropy; the only difference is that chromium is diffusing instead of heat. Do you claim that chromium diffusion has nothing to do with the second law? Or that chromium diffusion is really all about thermal entropy also?

And of course I don’t agree that tornados decrease entropy; they increase it. Tornados running backward decrease entropy, and the “entropy” I am talking about has little or nothing to do with thermal entropy.

The second law, in its more general forms, is about applying probability at the microscopic level to predict macroscopic change, not just about thermal entropy. The application to thermal entropy is only one of many applications. Obviously, you are using only statement (1) of the second law, given in my Bio-Complexity paper; I am using the more general statements (2) and (3).

If you believe that tornados turning a town into rubble “decrease” entropy, we are obviously going to disagree about the application of the second law to evolution! Won’t you at least admit that there are a lot of good physicists (the majority, I’m sure) who would say that tornados increase entropy, in the more general sense? Then I am in good company; I am not just reading out-of-date texts.
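Scenario-B-style diffusion in an isolated bar can be sketched numerically (a toy finite-difference model: the grid size, step count, and coefficient are made-up illustrative values, and the mixing entropy used here is a stand-in for the paper’s X-entropy, not its exact definition):

```python
import math

# 1-D diffusion of a conserved substance ("chromium") in an insulated bar,
# tracking a mixing entropy S = -sum(p_i ln p_i) of the normalized profile.
n = 50
c = [0.0] * n
for i in range(n // 2):
    c[i] = 1.0          # all the substance starts in the left half

r = 0.25                # D*dt/dx^2, kept below 0.5 for numerical stability

def mixing_entropy(c):
    total = sum(c)
    return -sum((ci / total) * math.log(ci / total) for ci in c if ci > 0)

entropies = [mixing_entropy(c)]
for _ in range(2000):
    # no-flux (insulated) boundaries: pad with the edge values
    padded = [c[0]] + c + [c[-1]]
    c = [ci + r * (padded[i] - 2 * ci + padded[i + 2])
         for i, ci in enumerate(c)]
    entropies.append(mixing_entropy(c))

# the entropy rises monotonically toward the uniform maximum ln(n)
assert all(b >= a - 1e-12 for a, b in zip(entropies, entropies[1:]))
print(round(entropies[0], 3), round(math.log(n), 3))  # 3.219 3.912
```

The quantity increases at every step as the concentration spreads out, with no reference to heat, which is the sense in which diffusion of any conserved substance obeys second-law-style behavior.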

14. 14
keiths says:

Granville,

Since my Cornell conference contribution has generated dozens of critical comments on another thread, I feel compelled to respond.

15. 15
keiths says:

A comment from last night, cross-posted from the other thread:

Timaeus,

Scientific papers are judged by their contents. The contents of Granville’s paper are awful. Based on those contents, and using Granville’s own words, I have shown that Granville:

1. Mistakenly asserts that “the increase in order which has occurred on Earth seems to violate the underlying principle behind the second law of thermodynamics, in a spectacular way.”

2. Titles his paper Entropy, Evolution and Open Systems without realizing that the second law is actually irrelevant to his improbability argument, since it is not violated by evolution.

3. Misunderstands the compensation argument and incorrectly rejects it.

4. Fails to understand that the compensation argument is a direct consequence of the second law, and that by rejecting it he is rejecting the second law itself!

5. Fails to realize that if the compensation argument were invalid, as he claims, then plants would violate the second law whenever their entropy decreased.

6. Asserts, with no evidence, that physics alone cannot explain the appearance of complex artifacts on Earth.

7. Offers, as evidence for the above, a thought experiment involving a simulation he can neither run nor analyze.

8. Declares, despite being unable to run or analyze the simulation, that he is “certain” of the outcome, and that it supports his thesis.

9. Confuses negentropy with complexity, as Lizzie explained.

10. Conflates entropy with disorder, as Lizzie explained.

Granville was unable to defend his paper, so he bailed out of the thread. You are now retreating also — probably a wise move. It remains to be seen what Eric and CS3 will do.

If Lizzie and I are able to expose egregious faults in Granville’s paper, using his own words, and none of you are capable of defending it, then how can you claim that his paper was good science that deserved to be accepted by the BI organizers?

By accepting Granville’s paper, the organizers showed that the BI was not a serious scientific conference. Springer did the right thing in refusing to publish.

16. 16
Mark Frank says:

Granville

I am jumping in here without having followed the debate because it is the opportunity to ask a very basic question. Are you proposing that the 2nd law of thermodynamics is not always true because an intelligent mind can overcome it? Surely, if you can prove this, it puts you in line for a Nobel prize.

17. 17

Prof. Sewell,

the good news: indeed the funny reactions of evolutionists are proof that they fear the 2nd law like fire. They apply all their best tactics to hide the point: the 2nd law disproves evolution. Please continue your battle tirelessly; thank you.

18. 18

Elizabeth:

However, a tornado does not necessarily result in a more uniformly diffused distribution of contents. It could well result in a greater concentration of them (messy though that concentration would be). And if we define “low entropy” as a configuration more able to do work, then if some of the stuff is up a tree, it can do more work than it could when it was neatly arranged on the floor of the house.

Well, all you have said, really, is that a tornado can convert some of its kinetic energy into potential energy. Big deal. So some of the tornado’s original energy is now available as potential energy (your stuff up in a tree). In other words, some of the tornado’s energy has not fully dissipated yet, but is still temporarily available in another form.

But you haven’t demonstrated at all how this could possibly apply to help explain the formation of, say, living systems.

It sounds to me like you and Granville are talking past each other because (i) you are saying that the 2nd law only applies to heat distribution and nothing else, and (ii) Granville is saying that it applies more broadly and that this broader application is well-known.

19. 19

Elizabeth:

On earth, we are wide open to the sun, and so it’s probably easier to think of the earth-sun as one system.

I was saving this point for later, but Elizabeth beat me to it! 🙂

This is one of the reasons why the “explanations” for how evolution could occur that rely on “because the Earth is an open system” are absolutely absurd. The openness or closedness of the Earth system is utterly irrelevant to the question of whether molecules can spontaneously come together to form life, or whether simple organisms can evolve into more complex organisms. To repeat: It is a complete red herring. As is the whole “compensation” idea.

20. 20

Mark Frank and keiths:

Stop being silly. No-one is arguing that the 2nd law has been violated. Granville has not made that argument either.

As I understand it, what he is saying is that if what evolutionists claim occurred (natural abiogenesis, significant increases in organismal structure and complexity, etc.) in fact occurred, it would contradict our uniform and repeated experience and the underlying principle of the 2nd law. Nothing to do with heat; nothing to do with open vs. closed; nothing to do with alleged compensation of entropy elsewhere.

Granville is not arguing that evolution shows the 2nd law has been violated. Rather, that the 2nd law shows that evolution (at least the key alleged macro events) is unlikely to the point of being a non-starter.

21. 21
keiths says:

The compensation idea is, frankly, silly. The reason a tree can exist has nothing to do with the fact that the Earth is an open system and the tree’s reduced entropy is “compensated” by increased entropy at the sun. Otherwise, tell me, please, what physical mechanism alerts the Sun to the fact that there is a tree growing on the Earth and, therefore, the Sun should increase its entropy?

Eric,

You have misunderstood the compensation argument. The argument is valid, but your understanding of it isn’t.

In the case of Earth, the compensation is not taking place on the Sun. In fact, the radiation emitted by the Sun actually works toward reducing its own entropy. Instead, the compensation happens because Earth is radiating energy out into its surroundings.

How do the surroundings “know” that they should increase their entropy? Because they receive the radiation from the Earth.

I’ll cross-post some of my remarks about the compensation argument below.

22. 22
cantor says:

KeithS wrote:
`The contents of Granville’s paper are awful. Based on those contents, and using Granville’s own words, I have shown that Granville...`

You have shown no such thing. You are taking down a strawman of your own making.

It seems 2LoT is the new “evolution”. It has many different meanings, and is too easily equivocated as Liddle & KeithS do.

If you have some constructive criticism of what Sewell actually wrote in his paper, we’d all like to hear it. Otherwise, please take your strawman arguments over to Panda’s Thumb or Why Evolution is true or Pharyngula where they’ll fit right in.

23. 23
keiths says:

Eric,

Stop being silly. No-one is arguing that the 2nd law has been violated. Granville has not made that argument either.

Well, Granville is right here. Let’s ask him.

Meanwhile, look at what he wrote in the OP:

Elizabeth and KeithS,
…please first give me another reason, other than the one I acknowledged, why there is a conflict with the second law (or at least the fundamental principle behind the second law) in scenario 1 [a tornado turning rubble into pristine houses and cars] and not in scenario 2 [evolution and its downstream consequences]? (Or perhaps you suddenly now don’t see any conflict with the second law in scenario 1 either, that is an acceptable answer, but now you are in conflict with the scientific consensus!)

24. 24
Andre says:

keiths

Your 10 reasons are bunked, here is one…

A net loss of mass which is related to energy means an increase in entropy.

25. 25
keiths says:

Andre,

I have no idea what you are trying to say. Could you rephrase that?

26. 26
Granville Sewell says:

KeithS,

By accepting Granville’s paper, the organizers showed that the BI was not a serious scientific conference. Springer did the right thing in refusing to publish

Obviously I don’t agree, but even if your opinion of my paper is correct, out of 24 papers there are always going to be a weak one or two; that does not mean the others were bad. Two of the best were by Jon Wells:

1. Not junk after all: non-protein-coding DNA carries extensive biological information

2. The membrane code: a carrier of essential information that is not specified by DNA and is inherited apart from it.

Wells’ points in both of these talks are being proved spectacularly correct by evidence accumulated recently. Why don’t we talk about these a little?

In any case, Nick and the others who demanded Springer withdraw the book didn’t even know I was a participant, or what any of the talks were about, for that matter.

27. 27
keiths says:

cantor,

If you disagree with me, you are welcome to show exactly where and why you think I am mistaken.

Just quote the points you disagree with and describe what you think my errors are. Then tell us what you think is correct and why.

28. 28
keiths says:

A comment on the compensation argument from the other thread:

For anyone who still doesn’t get it, here is an explanation of Granville’s biggest error.

The compensation argument says that entropy can decrease in a system as long as there is a sufficiently large net export of entropy from the system.

Granville misinterprets the compensation argument as saying that anything, no matter how improbable, can happen in a system as long as the above criterion is met.

This is obviously wrong, so Granville concludes that the compensation argument is invalid. In reality, only his interpretation of the compensation argument is invalid. The compensation argument itself is perfectly valid.

The compensation argument shows that evolution doesn’t violate the second law. It does not say whether evolution happened; that is a different argument.
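The bookkeeping behind the compensation argument is easy to sketch numerically. The figures below are rough textbook estimates of my own choosing (absorbed solar power and effective radiation temperatures, not values from this thread), and radiation entropy flux is crudely approximated as energy flux divided by temperature:

```python
# Entropy budget sketch for the Earth (illustrative textbook figures).
# Earth absorbs sunlight carrying entropy at the sun's effective radiation
# temperature and re-emits the same power at its own, much lower, effective
# temperature, so it exports more entropy than it imports.

T_SUN = 5800.0     # K, effective temperature of incoming solar radiation
T_EARTH = 255.0    # K, effective temperature of Earth's outgoing radiation
POWER = 1.22e17    # W, rough solar power absorbed by the Earth

entropy_in = POWER / T_SUN      # entropy flux arriving, J/(K*s)
entropy_out = POWER / T_EARTH   # entropy flux leaving, J/(K*s)

print(f"in : {entropy_in:.2e} J/(K*s)")
print(f"out: {entropy_out:.2e} J/(K*s)")

# The net export is what "compensates" for local entropy decreases on Earth.
assert entropy_out > entropy_in
```

On these rough numbers the export exceeds the import by more than a factor of twenty; that headroom is all the compensation argument appeals to, and it says nothing about which particular local entropy decreases actually occur.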

Granville confuses the two issues because of his misunderstanding of the compensation argument.

Since the second law isn’t violated, it has no further relevance. Granville is skeptical of evolution, but his skepticism has nothing to do with the second law.

He is just like every other IDer and creationist: an evolution skeptic.

You can see why this is a huge disappointment to him. Imagine if he had actually succeeded in showing that evolution violated a fundamental law of nature!

29. 29
keiths says:

Notice that his rejection of the compensation argument implies that plants violate the second law by using sunlight to grow. Thus the cornstalks shooting up in my home state of Indiana are cosmic scofflaws, according to Granville’s view.

If he’s right, then we’re surrounded by violations of the second law. Now do you begin to see why scientists find Granville’s position ridiculous?

Granville rejects the compensation argument without realizing that he thereby implicates plants as second lawbreakers.

Is there anyone here who thinks this is good science and wants to defend it? Do you think we are surrounded by plants, all busily violating the second law as they photosynthesize?

The compensation argument is perfectly valid, as any plant will tell you.

30. 30

Cantor:

It seems 2LoT is the new “evolution”. It has many different meanings, and is too easily equivocated as Liddle & KeithS do.

No, it doesn’t. It has one very precise meaning, although it can be stated in several different ways. All these statements, however, are equivalent.

What is NOT equivalent is the word “disorder” in the sentence “entropy is disorder” and the word “disorder” in the sentence “my house is in a state of disorder”.

The equivocation, though I am sure not deliberate, is Granville’s, not ours.

A chaotic system, like a tornado, has lower entropy than still air.

As a result, it is able to do work such as reduce towns to rubble and elevate large objects into trees, where they in turn can do further work when they fall out again.

People are also low-entropy systems, which means that they can also do work, including tidying up houses, and clearing away the rubble after a tornado.

It is perfectly possible for a system to include areas of local decreases in entropy, such as people, and tornadoes – what would be a violation of the 2nd Law would be if these did not result from some process that involves an overall increase in entropy – which they do. Tornadoes form at the cost of increased entropy – a reduced temperature differential between the Gulf of Mexico and Canada. People form at the cost of increased entropy in the food they consume, which is itself a local entropy decrease gained by virtue of the steadily increasing entropy in the sun.

31. 31
RodW says:

Dr. Sewell,

If I recall correctly, one of your original points in all of this was that a contention originally put forward by Isaac Asimov – namely, that the origin and evolution of life was only possible because Earth is an open system – is wrong. I agree, it’s wrong, but I don’t think you can use it to support your case. I think you conflate entropy with complexity. The two need not be related. Consider a volume containing random steel balls in motion. It doesn’t matter if this is a closed or open system; complexity will never be generated. On the other hand, if it contains shapes, say a trefoil, with magnets on the tips, it will generate interesting patterns over time, regardless of whether it’s an open system or not. If a system has the inherent ability to generate complexity and it’s an open system, that opens the possibility that further complexity can be generated by an iterative process.
Here’s a simple case in point. Start with several cubic parsecs of diffuse elemental hydrogen. This is about as dull and uncomplex a state as one can imagine. Make this effectively a closed system and wait 15 billion years. At the end of that time you’ll find a variety of complex stars with still more complex planetary systems composed of rocky and gaseous planets. Planets like Jupiter and Saturn are not only very complex objects, they’re also quite beautiful, and all of this comes as entropy increases. It’s effectively a closed system, since neither matter nor energy need come from the outside.
RodW

32. 32

Eric

This is one of the reasons why the “explanations” for how evolution could occur that rely on “because the Earth is an open system” are absolutely absurd. The openness or closedness of the Earth system is utterly irrelevant to the question of whether molecules can spontaneously come together to form life, or whether simple organisms can evolve into more complex organisms. To repeat: It is a complete red herring. As is the whole “compensation” idea.

It is. The solar system can be regarded as an almost closed system, but it is easily low-entropy enough to power chemical reactions, as we can see.

The 2nd Law is entirely irrelevant to the question as to whether life could form spontaneously.

33. 33
keiths says:

Lizzie,

One correction. You wrote:

People form at the cost of increased entropy in the food they consume, which is itself a local entropy decrease gained by virtue of the steadily increasing entropy in the sun.

It’s not the entropy increase of the sun that compensates for the entropy decrease of food production. Rather, it’s the entropy increase of Earth’s surroundings.

I explained this to Eric here.

34. 34

“The 2nd Law is entirely irrelevant to the question as to whether life could form spontaneously.”

Indeed the opposite. The 2nd law expresses the universal tendency towards probable states. A naturalistic origin of life implies extremely improbable states, which is exactly in the opposite direction.

35. 35

Thanks, keiths 🙂

36. 36

Indeed the opposite. The 2nd law expresses the universal tendency towards probable states. A naturalistic origin of life implies extremely improbable states, which is exactly in the opposite direction.

OK, go on. Why is there a “universal tendency towards probable states”?

37. 37
keiths says:

Naturalistic origin of life implies extremely improbable states…

How do you know that?

The 2nd law expresses the universal tendency towards probable states…

But it allows for local states that would otherwise be improbable, as long as sufficient entropy is exported to the surroundings.

Don’t fall into Granville’s trap of equating probability arguments with second law arguments. They are distinct, though there is some overlap.

I don’t need to invoke the second law to explain why I am unlikely to win the lottery.

38. 38

OK, go on. Why is there a “universal tendency towards probable states”?

This is like asking why the laws of physics are what they are and not different. We can like them or not, but either way the Law-giver stated them.

39. 39

“Naturalistic origin of life implies extremely improbable states… How do you know that?”

Like all IDers and evolutionists, I know that because biological machineries involve configurations that are rare relative to the vast number of non-living configurations.

40. 40

This is like asking why the laws of physics are what they are and not different. We can like them or not, but either way the Law-giver stated them.

Well, no. There’s a perfectly simple answer, which is that probable states are, by definition, more numerous than improbable states, and so it is more probable that a system will change from a less common state to a more common state than that it will change from a more common state to a less common state 🙂

In other words, it’s a tautology:

More probable things are more probable, therefore they are more probable 🙂

The usual statement of this formulation of the law (and they are all exactly equivalent) is as given in Granville’s BIOcomplexity paper:

In an isolated system, the direction of spontaneous change is from an arrangement of lesser probability to an arrangement of greater probability

Which is of course nearly as tautological. It only makes sense if we examine what is actually meant by “probable”.

A “probability” is not actually a property of a state, but a property of the processes that cause such states. For example, when you toss 100 coins (I daren’t say 500!), it is much more probable that you will get approximately equal numbers of heads and tails than extreme ratios. So if you stir up a bunch of coins that have been laid down mostly Heads, it is far more likely that you will end up with a more even ratio than a more extreme one.
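The coin example can be checked by direct counting; here is a minimal Python sketch (the particular head-counts compared are my own choice for illustration):

```python
from math import comb  # exact binomial coefficients

N = 100  # coins tossed, as in the example above

# comb(N, k) counts the arrangements (H/T sequences) with exactly k heads.
for k in (50, 90, 100):
    ways = comb(N, k)
    print(f"{k:3d} heads: {ways:.3e} arrangements, "
          f"probability {ways / 2**N:.3e}")

# A near-even split is astronomically more common than an extreme one.
assert comb(N, 50) > comb(N, 90) > comb(N, 100) == 1
```

Roughly 8% of all sequences land within a whisker of 50 heads, while exactly 100 heads is one sequence out of 2^100; “more probable” here just means “realized by vastly more arrangements”.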

However, if you have a bunch of rectangular legos on a tray, and jiggle them up, you will tend to end up with more lying on their sides than standing tall, because it’s more likely that a standing one will fall over when you jiggle the tray than that a lying down one will stand up.

In other words the most “probable” state of an arrangement of things isn’t a straightforward issue.

We need to know something about the things and the way they are most likely to arrange themselves under a given set of conditions before we can say that one arrangement is more probable than another.

So that statement of the 2nd Law is rather loose. However, it is meaningful, because what we can say, over all, is that smoother, flatter, lower contrast arrangements of stuff are more numerous than rougher, lumpier, high contrast arrangements. There are more ways to arrange a set of things where the state of one thing is uncorrelated with the state of thing next to it, than there are to arrange them so that they are correlated. This is why gasses diffuse – because there are more ways in which to make the position of one molecule uncorrelated with the position of the next, than ways in which they are correlated.
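A toy simulation makes the diffusion point concrete: start every “molecule” at the same spot and let each take unbiased random steps. The box size, particle count, and step count below are arbitrary choices of mine, not anything from the thread:

```python
import random

random.seed(0)  # deterministic, for reproducibility

N_CELLS, N_PARTICLES, N_STEPS = 100, 200, 20_000

# A maximally "lumpy", correlated start: every particle in cell 25.
positions = [25] * N_PARTICLES

def left_fraction(pos):
    """Crude lumpiness measure: fraction of particles in the left half."""
    return sum(1 for x in pos if x < N_CELLS // 2) / len(pos)

print("before:", left_fraction(positions))  # 1.0: all on the left

# Each particle takes unbiased +/-1 steps, with reflecting walls.
for _ in range(N_STEPS):
    positions = [min(N_CELLS - 1, max(0, x + random.choice((-1, 1))))
                 for x in positions]

print("after:", left_fraction(positions))  # close to 0.5: spread out
```

The lumpy, correlated start is just one arrangement; the spread-out states it relaxes into are vastly more numerous, which is all the “tendency” amounts to.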

And so, what that statement of the 2nd Law is saying is that smoother, lower contrast states are more probable than lumpier, higher contrast states, and so the spontaneous tendency will be towards smoother states, and these states we call “high entropy” states.

However, there is nothing to stop a local increase in lumpiness occurring at the cost of increased smoothness in an adjacent region. This is why tornadoes form – tornadoes are low entropy regions that are generated as two contrasting volumes of air, one hot and one cold, mix, the final result being a smoother, more even-temperatured mass of air. For example, if I tip the tray of legos, many may stand on their ends, and most will congregate at one end. But that has happened because I did something that made my own arrangement of stuff a little more smoothly arranged – my sugar molecules denatured, for instance.

41. 41
keiths says:

The question isn’t the size of the configuration space of living configurations vs. non-living configurations. The question is “how likely was the formation of a primordial replicator that could kickstart the evolution process?”

Anyway, that’s a separate topic. My main point was to urge you not to confuse probability arguments in general with second law arguments in particular.

Nothing about the second law prohibits a local decrease in entropy, such as you might find in the case of OOL, as long as the system exports a sufficient amount of entropy to its surroundings.

That is the gist of the compensation argument that Granville misunderstands so badly.

42. 42

keiths: “Nothing about the second law prohibits a local decrease in entropy, such as you might find in the case of OOL, as long as the system exports a sufficient amount of entropy to its surroundings.”

Elizabeth B Liddle: “However, there is nothing to stop a local increase in lumpiness occurring at the cost of increased smoothness in an adjacent region.”

Same error. It is like claiming that a low-probability event can happen here because a high-probability event happens there. Non sequitur.

43. 43

Same error. It is like claiming that a low-probability event can happen here because a high-probability event happens there. Non sequitur.

I’m not sure what you are saying. Can you rephrase?

And when you do, can you make it clear how you are computing the probability of the events?

44. 44
keiths says:

Granville, you wrote in the OP:

Well, this same compensation argument can equally well be used to argue that the second tornado in scenario 1 [the one that turns rubble back into houses and cars] does not violate the second law: the Earth is an open system, tornados receive their energy from the sun, any decrease in entropy due to a tornado that turns rubble into houses and cars is easily compensated by increases outside the Earth.

First, as Lizzie explained, you are inadvertently equivocating on the meaning of ‘disorder’:

What is NOT equivalent is the word “disorder” in the sentence “entropy is disorder” and the word “disorder” in the sentence “my house is in a state of disorder”.

The equivocation, though I am sure not deliberate, is Granville’s, not ours.

But let’s pretend for the sake of argument that you are correct and that ‘disorder’ in the sense of rubble is equivalent to ‘disorder’ in the sense of entropy.

Under that (false) assumption, it is actually true that the tornado doesn’t violate the second law if it reassembles rubble into houses and cars. It’s enormously improbable, and we would be astonished if it happened, but that’s not because it would violate the second law. We would be astonished for different reasons.

You keep making this mistake. You want the second law to rule out everything you think is too improbable, including evolution. But the second law rules out nothing but violations of the second law, just as the first law rules out nothing but violations of the first law. Obviously.

Evolution doesn’t violate the second law, so the second law is irrelevant to your doubts about evolution. So not only is your paper shot through with errors, it is also misnamed. Entropy, Evolution and Open Systems is a mistaken description. You didn’t even get the title of your paper right.

Granville, you might be a fine mathematician for all I know, but you definitely haven’t mastered physics. This paper is abysmal and should never have been accepted by the symposium organizers.

45. 45
Granville Sewell says:

Elizabeth and KeithS,

In my opinion, two of the best talks at Cornell were given by Jon Wells, one about the myth of junk DNA (“Not junk after all…”), the other about epigenetics (“The membrane code…”). Why don’t you download these and see if you think Springer was justified in canning their publication. His points in these talks are being spectacularly confirmed by recent discoveries. (See chapter 14 and pp. 400-402 of “Darwin’s Doubt”, for example.) Do you really think these were inferior papers?

46. 46

I’d like to ask Granville three questions, but anyone else feel free to answer:

2. Does a tornado represent a lower or higher entropy state than still air?

3. Is a tornado more ordered than still air?

47. 47

You evolutionists use the word “entropy” like a magic word, a sort of “abracadabra” able to import/export organization, as if organization were a fluid, at no cost, which can pass from one container to another.

Organization always entails low-probability states. These probabilities cannot be imported/exported like matter/energy. If OOL on Earth has low probability, that probability does not become reachable by chance and necessity even if we consider the solar system as a whole.

So, to say that OOL can happen on Earth because the Sun sends energy is nonsense. Energy alone has the power to organize nothing. Energy can power chemical reactions, but per se is not the first cause of biological organization.

48. 48
Upright BiPed says:

But it allows for local states that would otherwise be improbable, as long as sufficient entropy is exported to the surroundings.

>> Hey! How did that turtle get on the fencepost?

>> Uh. There was sufficient entropy in its surroundings.

49. 49
Granville Sewell says:

Elizabeth,

In all my references to tornados, I’m focusing on what it does to the town it hits, not the tornado itself. A video of a real tornado shows entropy increasing, because a pile of rubble is a more probable state than a town of houses and cars. There are many more ways to arrange pieces of wood and metal into rubble, than into houses and cars. A video of a tornado running backward shows entropy decreasing, and shows a violation of the second law, because the town is going from a more probable state to a much less probable state. And it doesn’t matter how much entropy is increasing outside the Earth, it is still extremely improbable that a tornado will turn rubble into houses and cars.

50. 50
51. 51

Boys (keiths, Elizabeth B Liddle et al. evolutionists),

stop defending the indefensible and arguing the unarguable, and finally come over to the ID camp.
Also, Prof. Sewell has promised: he will even grant you a degree in Thermodynamics. 🙂

52. 52
cantor says:

A video of a real tornado shows entropy increasing, because a pile of rubble is a more probable state than a town of houses and cars.

In the statement above, please clarify what kind of entropy you are referring to. Hopefully that will help keep the discussion focused on point.

53. 53

You evolutionists use the word “entropy” like a magic word, a sort of “abracadabra” able to import/export organization, as if organization were a fluid, at no cost, which can pass from one container to another.

Not at all. It has a very precise meaning. It is Granville, I suggest, who is using it as a “magic word” – or rather he is using “order” as a magic word, taking its meaning in one context and applying it to another.

54. 54

“Entropy…It has a very precise meaning.”

The old joke comes to mind: Claude Shannon meets John von Neumann.
Shannon: “John, I have problems with my last paper”.
Von Neumann: “Why, what is it about?”.
Shannon: “It is on information and entropy”.
Von Neumann: “Claude, don’t worry, write anything you like, nobody understands what entropy is”…

55. 55
keiths says:

Granville Sewell:

In my opinion, two of the best talks at Cornell were given by Jon Wells…Why don’t you download these and see if you think Springer was justified in canning their publication.

Granville, you started this thread so that we could discuss your paper. Why are you trying to change the subject?

56. 56
keiths says:

keiths:

But it [the second law] allows for local states that would otherwise be improbable, as long as sufficient entropy is exported to the surroundings.

Upright Biped:

>> Hey! How did that turtle get on the fencepost?

>> Uh. There was sufficient entropy in its surroundings.

Upright,

You’re making exactly the same mistake as Granville. To say that something is allowed by the second law does not mean that it is probable.

You can’t conclude that something is probable if it doesn’t violate the first law. Why would it be any different for the second law?

See this comment.

57. 57
keiths says:

cantor:

the radiation emitted by the Sun actually works toward reducing its own entropy.

You two need to fix this.

cantor,

You’re behind. I already pointed that out to Lizzie and she already accepted the correction.

I don’t know what your other two links are supposed to mean.

58. 58
Upright BiPed says:

psssssst … hey keith

(I’m not really interested in what’s merely possible in the minds of our opponents. Moreover, the second law doesn’t impact my interest, not because the earth is an open system, but because the information that organizes life on this planet is instantiated in a local symbol system which is not determined by thermodynamic law.)

59. 59
keiths says:

Granville Sewell:

A video of a real tornado shows entropy increasing, because a pile of rubble is a more probable state than a town of houses and cars.

cantor:

In the statement above, please clarify what kind of entropy you are referring to. Hopefully that will help keep the discussion focused on point.

cantor,

Yes, that’s exactly the problem. As Lizzie has been patiently explaining to him, Granville mistakenly thinks that ‘entropy’ means ‘disorder’, and that anything that seems disordered, like a pile of rubble after a tornado, must therefore have high entropy.

The second law is a law of thermodynamics. Granville doesn’t like that, because that means that the second law doesn’t rule out evolution.

To reach his desired conclusion, Granville therefore attempts to argue that the second law prohibits any extremely improbable event.

That’s a silly conflation. As I said earlier, I don’t need to invoke the second law to explain why I am unlikely to win the lottery.

That same confusion pervades Granville’s paper.

60. 60

Granville:

Elizabeth,

In all my references to tornados, I’m focusing on what it does to the town it hits, not the tornado itself.

I know, and that’s why I’m asking you about the tornado. If a tornado, which is a low entropy system, and thus an “improbable” system, but also an undesigned system, can form without violating the 2nd Law, why not other low entropy systems, such as houses and people?

But clearly I do not dispute that tornadoes cannot turn rubble into houses! My point is that it isn’t the 2nd Law that prevents them, it’s simply that that’s not what tornadoes do.

They can do other low-probability things, though.

61. 61
keiths says:

Upright Biped:

Moreover, the second law doesn’t impact my interest, not because the earth is an open system, but because the information that organizes life on this planet is instantiated in a local symbol system which is not determined by thermodynamic law.)

Upright,

If the second law doesn’t “impact your interest”, then why are you attempting to comment on entropy and getting it so badly wrong?

I guess your lack of interest explains why you never bothered to learn about entropy or the second law.

62. 62
Upright BiPed says:

63. 63

The old joke comes to mind: Claude Shannon meets John von Neumann.
Shannon: “John, I have problems with my last paper”.
Von Neumann: “Why, what is it about?”.
Shannon: “It is on information and entropy”.
Von Neumann: “Claude, don’t worry, write anything you like, nobody understands what entropy is”…

Indeed.

What is interesting is that Shannon entropy, which is mathematically very closely related to thermodynamic entropy, is actually negatively correlated with it. As thermodynamic entropy goes up, Shannon entropy goes down.

The more ways there are of rearranging the system, the greater the Shannon entropy, and the lower the thermodynamic entropy. At near absolute zero, thermodynamic entropy is very high, and Shannon entropy is very low.

This is what Granville seems not to understand – that a state of very high thermodynamic entropy is not “highly disordered”, but more akin to 99 Heads and one Tail – precisely the “low probability” pattern people were inferring “Design” from in another thread!

ID authors really need to figure out what they mean by “probability”!

As I keep asking my students: “probability of what, given what?”

64. 64
keiths says:

Upright,

My joke was better, even if you didn’t get it.

Yes, pretend that it was merely a joke, and that you weren’t actually trying to make a point. A point that was promptly swatted down.

65. 65
Alan Fox says:

You may not be the most unbiased judge of your own jokes!

66. 66
Upright BiPed says:

How did that turtle get on the fencepost?
There was sufficient entropy in its surroundings.

“promptly swatted down”

“Promptly Swatted Down!!”

. . .

have a good holiday keith

67. 67
68. 68
keiths says:

cantor,

I’m talking about Granville’s paper in that comment. That’s the point: Granville’s paper is terrible. Springer was right not to publish.

The BI symposium organizers should have rejected Granville’s paper, but they didn’t.

69. 69
keiths says:

have a good holiday keith

You too, Upright.

70. 70
cantor says:

`cantor,`


`You’re behind. I already pointed that out to Lizzie and she already accepted the correction`

She may have “accepted the correction”, but she didn’t fix her argument.

`I don’t know what your other two links are supposed to mean.`

Perhaps if I paraphrase them?

Link1: When something does work its entropy increases

71. 71
cantor says:

`I’m talking about Granville’s paper in that comment. That’s the point: Granville’s paper is terrible. Springer was right not to publish.`

In that comment you were affirming Springer’s decision not to publish the proceedings. Granville’s post was replying to your affirmation. Don’t accuse him of changing the subject.

72. 72
keiths says:

cantor,

In that comment you were affirming Springer’s decision not to publish the proceedings. Granville’s post was replying to your affirmation. Don’t accuse him of changing the subject.

He did try to change the subject, from his own paper to those written by Jonathan Wells:

Elizabeth and KeithS,

In my opinion, two of the best talks at Cornell were given by Jon Wells, one about the myth of junk DNA (“Not junk after all…”), the other about epigenetics (“The membrane code…”). Why don’t you download these and see if you think Springer was justified in canning their publication. His points in these talks are being spectacularly confirmed by recent discoveries. (See chapter 14 and pp. 400-402 of “Darwin’s Doubt”, for example.) Do you really think these were inferior papers?

73. 73
keiths says:

cantor,

She may have “accepted the correction”, but she didn’t fix her argument.

What are you asking her to do? She doesn’t have the power to edit her past comments.

Also, I still don’t know what your other two links are supposed to mean.

Try expressing it in this form:

“I agree/disagree with X, because Y. Instead, I think Z.”

Where

X is a description of the idea you agree or disagree with,
Y is your reason for agreeing or disagreeing,
and Z is your own view of the issue.

That would help considerably.

74. 74
cantor says:

`He did try to change the subject, from his own paper to those written by Jonathan Wells.`

The subject, which you brought into this thread and which he was addressing, was the Springer decision not to publish the proceedings. To support his argument, he mentioned the other papers.

It amazes me how deferentially you treat Liddle and how disrespectful you are to Sewell. It is most unbecoming.

75. 75
cantor says:

`What are you asking her to do? She doesn’t have the power to edit her past comments.`

She had the power to post “Thanks, keiths”.

`Also, I still don’t know what your other two links are supposed to mean.`

Here’s the problem, Keith. These three statements seem to be mutually exclusive:

1) When something does work its entropy increases (Liddle)

2) The sun does work (Liddle)

3) The entropy of the sun is decreasing (KeithS)

Liddle cannot logically affirm the third statement with a “Thanks, keiths” without clarifying or correcting one or both of the first two statements.

NB: In my original post I quoted the statements verbatim. I paraphrased the statements to help KeithS understand, at his request, the point I was making. If the paraphrases do not reflect the author’s intended meaning, now would be a good time to clarify what was intended.

76.
kairosfocus says:

F/N: First, it should be noted that the Clausius equation that quantifies entropy and its increase in an isolated system is based on how the heat-transferring body A loses entropy while the heat-gaining body B INCREASES in entropy.

The increase exceeds the decrease algebraically because the ratio d’q/T is larger for B, whose temperature T is lower.

The physical meaning is that for B, at the lower temperature, the number of ways energy and mass can be arranged at the micro level consistent with the macrostate increases by more than the corresponding number for A decreases, so the net result for A and B together is an increase.
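A minimal numerical sketch of this two-body bookkeeping (the temperatures and heat quantity below are illustrative assumptions, not values from the discussion):

```python
# Clausius bookkeeping: heat Q flows from hot body A to cold body B.
# A loses entropy Q/T_A; B gains entropy Q/T_B; since T_B < T_A,
# the gain exceeds the loss and the total entropy of A + B rises.

Q = 100.0    # joules transferred (illustrative)
T_A = 400.0  # kelvin, hotter body A (illustrative)
T_B = 300.0  # kelvin, colder body B (illustrative)

dS_A = -Q / T_A          # entropy change of A (negative)
dS_B = +Q / T_B          # entropy change of B (positive, larger in magnitude)
dS_total = dS_A + dS_B

print(dS_A, dS_B, dS_total)  # -0.25  0.333...  0.0833... (J/K)
assert dS_total > 0          # net increase, as the second law requires
```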

Which brings us to the point that raw heat addition to a system tends to INCREASE its entropy. To instead get work, there is a need for a coupling mechanism that performs physical work and as a rule exhausts waste heat to a yet colder body C.

When that mechanism exhibits FSCO/I it is of course per that sign credibly designed. (This gets us back into the issues on searching large config spaces.)

The second point is of course that the concept of increasing the number of ways mass and energy can be arranged is exactly a way of saying that we are dealing with configuration spaces (and more generally phase spaces), and that when the specific arrangement of a body at the micro level is less constrained, its entropy has risen.

The sorts of narratives being told by evo mat advocates first typically fail to have empirical warrant — show us the spontaneous origin of a self-replicating entity, much less a code-using one — and too often refuse to adequately address the issue of config states and clusters of states, thus the tendency to move to the bulk cluster.

And BTW one way to define entropy is in terms of average missing information to specify micro state given the info in the macrostate specified by lab observable conditions such as P, V, T etc.

Prof Sewell has a serious point but we are not dealing here with those inclined to acknowledge anything that does not sit comfortably with their ideology.

I suggest a read here early in the UD ID foundations series. There is also further discussion in my always linked note. TMLO by Thaxton et al is also worth reading.

But I am not fooling myself that ideologues who cannot acknowledge the obvious and the mathematically evident about a 500 H coin flip result, will acknowledge something like this.

To hope for that is a waste of time and energy.

KF

77.
Eric Anderson says:

Elizabeth @32:

The 2nd Law is entirely irrelevant to the question as to whether life could form spontaneously.

Well, certainly the open/closed nature of the system is essentially irrelevant. As is the alleged compensation “explanation” that is often offered by evolutionists (who, ironically, must be operating under the assumption that without such compensation there would be an issue).

1. So the 2nd Law relates only to heat distribution, in your view?

2. And are there any metabolic or heat transfer or similar biological processes in living systems about which the 2nd Law might have something to say?

3. If the 2nd Law applies only to heat distribution, is there any similar principle at work with, say, functional mechanical structure or information?

78.
keiths says:

cantor,

Granville opened this thread for the purpose of talking about his paper. How would talking about Jonathan Wells’ papers not be a change of subject?

It amazes me how deferentially you treat Liddle and how disrespectful you are to Sewell.

If criticizing Granville’s paper amounts to disrespect in your eyes, then call me disrespectful. My inclination is not to harp on it, but people here (including Granville himself) won’t let the subject drop!

79.
cantor says:

`KeithS @78: Granville opened this thread for the purpose of talking about his paper.`

That’s correct. But you weren’t content with limiting the discussion to the technical content of his paper.

You couldn’t resist bringing up the Springer affair. You derailed the conversation in that direction, and then gratuitously insulted Granville when he responded to what you said.

`How would talking about Jonathan Wells’ papers not be a change of subject?`

How would you bringing up the Springer affair not be a change of subject? And if not, how would responding to it then be a change of subject?

80.
keiths says:

Eric Anderson:

Well, certainly the open/closed nature of the system is essentially irrelevant. As is the alleged compensation “explanation” that is often offered by evolutionists (who, ironically, must be operating under the assumption that without such compensation there would be an issue).

Eric,

The reason evolutionists talk about compensation is that IDers and creationists keep making bogus second law arguments!

81.
cantor says:

`keiths @78: If criticizing Granville’s paper amounts to disrespect in your eyes...`

Baloney. You know full well what I am talking about.

82.
Bilbo I says:

Arthur Hunt once described the action inside a cell as a hurricane, only much faster. Yet it resulted in the construction of multi-protein complexes. I imagine if parts of houses were attracted to each other, and could withstand the forces of a tornado throwing them together, a tornado could build a house.

Interesting: I’m logged in as Bilbo I, but it shows that this comment will be posted by “Anonymous.” I wonder why.

83.
Bilbo I says:

Oh good, posted as Bilbo I.

84.
CS3 says:

Hopefully a few citations and comments will help clarify that Sewell is not the one who is confused on this issue.

From University Physics by Young and Freedman (one of the most widely used calculus-based general physics textbooks), in a section entitled “Microscopic Interpretation of Entropy” in the chapter “The Second Law of Thermodynamics”:

Entropy is a measure of the disorder of the system as a whole. To see how to calculate entropy microscopically, we first have to introduce the idea of macroscopic and microscopic states.

Suppose you toss N identical coins on the floor, and half of them show heads and half show tails. This is a description of the large-scale or macroscopic state of the system of N coins. A description of the microscopic state of the system includes information about each individual coin: Coin 1 was heads, coin 2 was tails, coin 3 was tails, and so on. There can be many microscopic states that correspond to the same macroscopic description. For instance, with N=4 coins there are six possible states in which half are heads and half are tails. The number of microscopic states grows rapidly with increasing N; for N=100 there are 2^100 = 1.27×10^30 microscopic states, of which 1.01×10^29 are half heads and half tails.

The least probable outcomes of the coin toss are the states that are either all heads or all tails. It is certainly possible that you could throw 100 heads in a row, but don’t bet on it: the probability of doing this is only 1 in 1.27×10^30. The most probable outcome of tossing N coins is that half are heads and half are tails. The reason is that this macroscopic state has the greatest number of corresponding microscopic states.

To make the connection to the concept of entropy, note that N coins that are all heads constitutes a completely ordered macroscopic state: the description “all heads” completely specifies the state of each one of the N coins. The same is true if the coins are all tails. But the macroscopic description “half heads, half tails” by itself tells you very little about the state (heads or tails) of each individual coin. We say that the system is disordered because we know so little about its microscopic state. Compared to the state “all heads” or “all tails”, the state “half heads, half tails” has a much greater number of possible microstates, much greater disorder, and hence much greater entropy (which is a quantitative measure of disorder).

Now instead of N coins, consider a mole of an ideal gas containing Avogadro’s number of molecules. The macroscopic state of this gas is given by its pressure p, volume V, and temperature T; a description of the microscopic state involves stating the position and velocity for each molecule in the gas. At a given pressure, volume, and temperature the gas may be in any one of an astronomically large number of microscopic states, depending on the positions and velocities of its 6.02×10^23 molecules. If the gas undergoes a free expansion into a greater volume, the range of possible positions increases, as does the number of possible microscopic states. The system becomes more disordered, and the entropy increases.

We can draw the following general conclusion: For any system the most probable macroscopic state is the one with the greatest number of corresponding microscopic states, which is also the macroscopic state with the greatest disorder and the greatest entropy.

Sewell’s statement follows directly from this: in an isolated system, the reason natural forces (such as tornados) “may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy.”
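Incidentally, the counting in the quoted Young and Freedman passage is easy to check with Python’s standard library; a quick sketch:

```python
from math import comb

N = 100
total_microstates = 2 ** N           # every possible heads/tails assignment
half_and_half = comb(N, N // 2)      # microstates with exactly 50 heads

print(f"{total_microstates:.3e}")    # ~1.27e+30, matching the textbook
print(f"{half_and_half:.3e}")        # ~1.01e+29, matching the textbook

# "All heads" is a single microstate, so its probability is tiny:
p_all_heads = 1 / total_microstates
print(p_all_heads)                   # ~7.9e-31, i.e. 1 in 1.27e30
```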

In an open system, of course, entropy can decrease. To use Liddle’s example from another thread, a drop of water in an open system can release heat and form a snowflake. The entropy of the system has decreased, but the entropy of the surroundings and the overall universe has increased. However, that does not mean that the water molecules can assume just any more ordered (more improbable) state; they are still subject to the four fundamental forces, and will do what those predict they will do in that state. They will not, for example, assume the shape of a detailed reproduction of Abraham Lincoln’s face. Similarly, the Earth is an open system, so there can be local entropy decreases – but what happens is still subject to the four fundamental forces. Evolution is not a force that moves particles; it is just a description of a process, which still must ultimately be the result of the four fundamental forces acting on particles. In the case of going from a barren planet to one filled with human brains, computers, and encyclopedias, the scale and type of the increase in order (including CSI) is far greater and far different than in the case of the snowflake.
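CS3’s snowflake example can be made quantitative. The sketch below uses an approximate latent heat of fusion and illustrative temperatures (assumptions chosen for the example, not figures from the thread):

```python
# A 1 mg water droplet freezes at 273 K, releasing latent heat into
# surrounding air at 263 K. The droplet's entropy falls (the snowflake is
# more ordered), but the colder surroundings gain more entropy than the
# droplet loses, so total entropy still increases.

m = 1e-6          # kg of water (1 mg droplet), illustrative
L_f = 334e3       # J/kg, approximate latent heat of fusion of water
T_freeze = 273.0  # K, droplet temperature during freezing
T_air = 263.0     # K, surrounding air (must be colder for freezing)

Q = m * L_f                   # heat released by the droplet
dS_droplet = -Q / T_freeze    # local decrease in the open "system"
dS_air = +Q / T_air           # larger increase in the colder surroundings

print(dS_droplet + dS_air)    # positive: the universe's entropy rises
assert dS_droplet + dS_air > 0
```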

Nevertheless, as Sewell has always allowed, “it is not always easy to say what the Second Law predicts”, so nothing stops you from arguing that, just as snowflakes are what the actions of the gravitational, electromagnetic, and strong and weak nuclear forces predict will form from a drop of water when entering the lower entropy state, maybe houses are what the four unintelligent fundamental forces predict will form from a pile of rubble when entering a lower entropy state (in a process called “reverse tornado”), and maybe human brains, computers, and encyclopedias are what the four fundamental forces predict will form from a barren planet when sunlight enters it (in a process called “Darwinian evolution”). (Well, maybe some of the work by Dembski and others, if correct, will stop you from arguing that the unintelligent forces created the CSI, but that is beyond the scope of Sewell’s argument here.)

In the realm of life, you can find much more spectacular local increases in order in open systems, perhaps even on a par with reverse tornados turning rubble into houses (though still not developing new CSI). For example, in the oft-cited plant example, in an open system receiving sunlight, a plant can convert soil and water into a beautiful flower without violating the Second Law, because that actually is what the four fundamental forces predict will happen in this case – but only because there exists within the plant an extremely elegant mechanism to achieve this. Using examples from life, however, is decidedly “cheating” when discussing ID, because the whole point of ID is the claim that life is designed. Thus, a flowering plant is not an example of what the four unintelligent forces alone can do, according to ID; rather, it is an example of how a well-designed system can be engineered to achieve impressive local increases of some type of order without itself violating the Second Law in any way. Restricting ourselves to only the abiotic world (and also excluding creations of human intelligence), where it is agreed the four unintelligent forces are operating unaided, there may be some examples mildly more interesting than the snowflake, but obviously nothing remotely similar in scale or type to a reverse tornado constructing houses from rubble or Darwinian evolution constructing human brains, computers, and encyclopedias from a barren planet.

While at least some on here seem to have no qualms with making this argument that a planet full of spaceships and the Internet is not an improbable result of the actions of the gravitational, electromagnetic, and strong and weak nuclear forces acting unaided on a sunlit barren planet, many scientists realized the absurdity of trying to make this argument, and thus developed the compensation argument to avoid having to argue that it is not improbable. According to the compensation argument, all the Second Law requires is that the overall entropy of the universe must increase, so we can say absolutely nothing about what can and cannot happen within an open system, just so long as the entropy decrease due to any more improbable arrangement achieved in the system is offset by an equal or greater increase of some other entropy. Then, you can say, sure, human brains, computers, and encyclopedias are an extremely improbable result of the actions of four unintelligent forces, but the increase in entropy due to solar influx is even greater, so there is no problem with the Second Law.

This illogical argument is what Sewell refutes in his paper. The laws of probability are not suspended when a system is opened; you just have to then consider what is entering and leaving the system when deciding what is or is not improbable. What enters or leaves the system must be causally related to the event that was extremely improbable when the system was isolated in order for it not to be improbable when the system is open; it cannot merely “compensate” for it in some global accounting scheme.

As a postscript, below are some more quotes from scientists who are apparently just as “confused” as Sewell, thinking the Second Law is related to probability and not just heat energy, followed by some more quotes stating the compensation argument:

From University Physics by Young and Freedman:

There is a relationship between the direction of a process and the disorder or randomness of the resulting state. For example, imagine a tedious sorting job, such as alphabetizing a thousand book titles written on file cards. Throw the alphabetized stack of cards into the air. Do they come down in alphabetical order? No, their tendency is to come down in a random or disordered state. In the free expansion of a gas, the air is more disordered after it has expanded into the entire box than when it was confined in one side, just as your clothes are more disordered when scattered all over your floor than when confined to your closet.

From a different edition of University Physics, in a section about “building physical intuition” about the Second Law:

A new deck of playing cards is sorted out by suit (hearts, diamonds, clubs, spades) and by number. Shuffling a deck of cards increases its disorder into a random arrangement. Shuffling a deck of cards back into its original order is highly unlikely.

From Basic Physics by Kenneth Ford:

Imagine a motion picture of any scene of ordinary life run backward. You might watch…a pair of mangled automobiles undergoing instantaneous repair as they back apart. Or a dead rabbit rising to scamper backward into the woods as a crushed bullet re-forms and flies backward into a rifle while some gunpowder is miraculously manufactured out of hot gas. Or something as simple as a cup of coffee on a table gradually becoming warmer as it draws heat from its cooler surroundings. All of these backward-in-time views and a myriad more that you can quickly think of are ludicrous and impossible for one reason only – they violate the second law of thermodynamics. In the actual scene of events, entropy is increasing. In the time reversed view, entropy is decreasing.

From General Chemistry, 5th Edition, by Whitten, Davis, and Peck:

The Second Law of Thermodynamics is based on our experiences. Some examples illustrate this law in the macroscopic world. When a mirror is dropped, it can shatter…The reverse of any spontaneous change is nonspontaneous, because if it did occur, the universe would tend toward a state of greater order. This is contrary to our experience. We would be very surprised if we dropped some pieces of silvered glass on the floor and a mirror spontaneously assembled.

Then, discussing a figure showing an isolated system consisting of two bulbs connected by an open stopcock containing molecules of two gasses (one red and one blue):

The ideas of entropy, order, and disorder are related to probability. The more ways an event can happen, the more probable that event is. In Figure 15-10b (showing both red and blue molecules randomly mixed in both bulbs) each individual red molecule is equally likely to be in either container, as is each individual blue molecule. As a result, there are many ways in which the mixed arrangement of Figure 15-10b can occur, so the probability of its occurrence is high, and so its entropy is high. In contrast, there is only one way the unmixed arrangement in Figure 15-10a (showing all red molecules in one bulb and all blue molecules in the other bulb) can occur. The resulting probability is extremely low, and the entropy of this arrangement is low.
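The counting behind that two-bulb passage can be verified directly; a small sketch with illustrative molecule counts (far smaller than a real gas, which only strengthens the point):

```python
from math import comb

n_red, n_blue = 10, 10   # illustrative molecule counts

# Each molecule is independently equally likely to be in either bulb,
# so each of the 2**20 assignments of the 20 molecules is equally probable.
total = 2 ** (n_red + n_blue)

# Exactly one assignment puts every red in bulb A and every blue in bulb B.
p_unmixed = 1 / total
print(p_unmixed)          # ~9.5e-7 even for just 20 molecules

# By contrast, "half the reds and half the blues in each bulb" is realized
# by many assignments, so the mixed macrostate dominates:
n_mixed = comb(n_red, n_red // 2) * comb(n_blue, n_blue // 2)
print(n_mixed, n_mixed / total)   # 63504 assignments, probability ~0.061
```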

From Isaac Asimov in “In the game of energy and thermodynamics, you can’t even break even”:

We have to work hard to straighten a room, but left to itself, it becomes a mess again very quickly and very easily…. How difficult to maintain houses, and machinery, and our own bodies in perfect working order; how easy to let them deteriorate. In fact, all we have to do is nothing, and everything deteriorates, collapses, breaks down, wears out — all by itself — and that is what the second law is all about.

From Asimov again:

You can argue, of course, that the phenomenon of life may be an exception [to the second law]. Life on earth has steadily grown more complex, more versatile, more elaborate, more orderly, over the billions of years of the planet’s existence. From no life at all, living molecules were developed, then living cells, then living conglomerates of cells, worms, vertebrates, mammals, finally Man. And in Man is a three-pound brain which, as far as we know, is the most complex and orderly arrangement of matter in the universe. How could the human brain develop out of the primeval slime? How could that vast increase in order (and therefore that vast decrease in entropy) have taken place?

I am pretty sure Asimov is not trying to say here, “wow, life, and especially human brains, sure burn well on my barbeque”.

Asimov then makes the compensation argument:

Remove the sun, and the human brain would not have developed. … And in the billions of years that it took for the human brain to develop, the increase in entropy that took place in the sun was far greater; far, far greater than the decrease that is represented by the evolution required to develop the human brain.

As does Peter Urone in College Physics:

Some people misuse the second law of thermodynamics, stated in terms of entropy, to say that the existence and evolution of life violate the law and thus require divine intervention. … It is true that the evolution of life from inert matter to its present forms represents a large decrease in entropy for living systems. But it is always possible for the entropy of one part of the universe to decrease, provided the total change in entropy of the universe increases.

And Angrist and Hepler in Order and Chaos:

In a certain sense the development of civilization may appear contradictory to the second law. … Even though society can effect local reductions in entropy, the general and universal trend of entropy increase easily swamps the anomalous but important efforts of civilized man. Each localized, man-made or machine-made entropy decrease is accompanied by a greater increase in entropy of the surroundings, thereby maintaining the required increase in total entropy.

85.
keiths says:

Hi, Bilbo!

Long, long time no see!

86.
CS3 says:

One more closing comment, although I admit it is a little snarky 🙂

For the Darwinists arguing that the Second Law is only about heat energy and not about probability, I think your view of the Second Law is about 150 years out of date (when the first formulations were all just about heat). However, your view of biology is also about 150 years out of date (when Darwin formulated his theory, unaware of the complex molecular machinery within the cell), so I guess at least you are consistent in that regard. 🙂

87.
Bilbo I says:

Hi Keith,

Do we know each other from way back at ARN?

88.
keiths says:

No, the early, early days of UD.

89.
Bilbo I says:

OK, that’s almost as long ago.

90.
Andre says:

don’t forget the role mass plays in entropy…. especially in an open system that is losing mass faster than it is accumulating it…..

91.
Bilbo I says:

Andre, is it really true that Earth is losing mass faster than it is gaining it? Is this a constant thing or does it vary?

92.

keiths @21:

Instead, the compensation happens because Earth is radiating energy out into its surroundings.

How do the surroundings “know” that they should increase their entropy? Because they receive the radiation from the Earth.

Look, the only reason the Sun came up in this discussion is that a number of evolution apologists have said, when confronted with questions, “But the Earth is an open system; it receives energy from the Sun and the Sun’s entropy is increasing to compensate for what happens on Earth.”

I’m glad to know that you agree the entropy situation on the Sun is irrelevant. That is good. Would that everyone would acknowledge that the whole “Earth is an open system” response is nonsense.

Now, however, we’re supposed to believe that, because “Earth is radiating energy out into its surroundings”,* the compensation happens via the Earth’s radiation?

I’m laughing even thinking about this, but since you’re the expert, I’d like to understand what it is that you think is going on.

What is the physical mechanism you think is compensating for, in the example at hand, the growth of a tree? What is the initial trigger for this mechanism? How does the amount of compensation get adjusted for the decrease in entropy brought about by our tree?

What kind of a physical system is being proposed here?

—–

* Note, everyone, this is exactly what the Sun is doing. Thus, there is no rational basis for distinction between the Earth’s radiation and the Sun’s radiation, but we’ll play along.

93.

Incidentally, for keiths, Elizabeth, and anyone else who keeps incorrectly claiming that the thermodynamic issue is (i) irrelevant and (ii) only on the table because of creationist talking points, it is worth revisiting a thread from last year in which Nick Matzke sent us down a rabbit hole on the idea of life being some kind of “kinetic state,” based on a published paper he was rather enamored with.

That “kinetic state” is a separate issue, but the reason I bring it up here is that the authors — devout evolutionists to be sure — acknowledged that they were trying to deal with the thermodynamic issues relating to the origin and maintenance of life. Indeed, the whole reason they put forth their “kinetic state” of life argument was to try and solve these issues.

Yes, Virginia, they are talking about the same kind of thing Granville is talking about. They refer specifically to thermodynamics and the problem of “far-from-equilibrium-systems.”

Their “kinetic state” solution turned out to be nonsense in its own right and is a topic for another time, but the key takeaway for this thread is that the authors of that paper acknowledged the thermodynamic/equilibrium issue as a live problem for evolutionary research.

http://www.uncommondescent.com.....ent-421718

94.
bornagain77 says:

I’ve always found the compensation (open system) argument from atheists to be a very disingenuous argument since the second law was formulated right here on earth, an open system, in the first place! ,,, And even though the harmful energy coming from the sun is very constrained as to allow only that energy which is most useful to life to reach the earth,,,

Extreme Fine Tuning of Light for Life and Scientific Discovery – video
http://www.metacafe.com/w/7715887

Fine Tuning Of Universal Constants, Particularly Light – Walter Bradley – video
http://www.metacafe.com/watch/4491552

Fine Tuning Of Light to the Atmosphere, to Biological Life, and to Water – graphs

,,, and even though the energy coming from the sun is very constrained in such a way,, In the following video,,,

Evolution Vs. Thermodynamics – Open System Refutation – Thomas Kindell – video
http://www.metacafe.com/watch/4143014

,,,Dr. Thomas Kindell points out that the harmful raw energy from the sun that is allowed to reach the earth must be further refined and converted into useful energy by photosynthesis,,, and indeed, contrary to evolutionary expectations, we now have evidence for photosynthetic life suddenly appearing on earth, as soon as water appeared on the earth, in the oldest sedimentary rocks ever found on earth.

The Sudden Appearance Of Photosynthetic Life On Earth – video
http://www.metacafe.com/watch/4262918

U-rich Archaean sea-floor sediments from Greenland – indications of +3700 Ma oxygenic photosynthesis (2003)

,,,yet photosynthesis is a very, very, complex process which is certainly not conducive to an easy materialistic explanation,,,

“There is no question about photosynthesis being Irreducibly Complex. But it’s worse than that from an evolutionary perspective. There are 17 enzymes alone involved in the synthesis of chlorophyll. Are we to believe that all intermediates had selective value? Not when some of them form triplet states that have the same effect as free radicals like O2. In addition if chlorophyll evolved before antenna proteins, whose function is to bind chlorophyll, then chlorophyll would be toxic to cells. Yet the binding function explains the selective value of antenna proteins. Why would such proteins evolve prior to chlorophyll? and if they did not, how would cells survive chlorophyll until they did?” Uncommon Descent Blogger

Evolutionary biology: Out of thin air John F. Allen & William Martin:
The measure of the problem is here: “Oxygenetic photosynthesis involves about 100 proteins that are highly ordered within the photosynthetic membranes of the cell.”
http://www.nature.com/nature/j.....5610a.html

The Miracle Of Photosynthesis – electron transport – video

Michael Denton: Remarkable Coincidences in Photosynthesis – podcast
http://www.idthefuture.com/201....._coin.html

95.
bornagain77 says:

In fact there is a irreducibly complex molecular machine at the heart of photosynthesis:

The ATP Synthase Enzyme – exquisite motor necessary for first life – video

ATP Synthase, an Energy-Generating Rotary Motor Engine – Jonathan M. May 15, 2013
Excerpt: ATP synthase has been described as “a splendid molecular machine,” and “one of the most beautiful” of “all enzymes” .,, “bona fide rotary dynamo machine”,,,
If such a unique and brilliantly engineered nanomachine bears such a strong resemblance to the engineering of manmade hydroelectric generators, and yet so impressively outperforms the best human technology in terms of speed and efficiency, one is led unsurprisingly to the conclusion that such a machine itself is best explained by intelligent design.
http://www.evolutionnews.org/2.....72101.html

Thermodynamic efficiency and mechanochemical coupling of F1-ATPase – 2011
Excerpt:F1-ATPase is a nanosized biological energy transducer working as part of FoF1-ATP synthase. Its rotary machinery transduces energy between chemical free energy and mechanical work and plays a central role in the cellular energy transduction by synthesizing most ATP in virtually all organisms.,,
Our results suggested a 100% free-energy transduction efficiency and a tight mechanochemical coupling of F1-ATPase.

Yet, photosynthesis presents a far more difficult challenge to Darwinists than just explaining how all these extremely complex mechanisms for converting raw energy into useful energy ‘just so happened’ to ‘randomly’ come about so as to enable life to be possible.,,,In what I find to be a very fascinating discovery, it is found that photosynthetic life, which is an absolutely vital link that all higher life on earth is dependent on for food, uses ‘non-local’, beyond space and time, quantum mechanical principles to accomplish photosynthesis

Quantum Mechanics at Work in Photosynthesis: Algae Familiar With These Processes for Nearly Two Billion Years – Feb. 2010
Excerpt: “We were astonished to find clear evidence of long-lived quantum mechanical states involved in moving the energy. Our result suggests that the energy of absorbed light resides in two places at once — a quantum superposition state, or coherence — and such a state lies at the heart of quantum mechanical theory.”,,, “It suggests that algae knew about quantum mechanics nearly two billion years before humans,” says Scholes.
http://www.sciencedaily.com/re.....131356.htm

At the 21:00 minute mark of the following video, Dr Suarez explains why photosynthesis needs a ‘non-local’, beyond space and time, cause to explain its effect:

Nonlocality of Photosynthesis – Antoine Suarez – video – 2012

As a Theist, I, of course, have a ‘non-local’ beyond space and time cause to appeal to to explain photosynthesis,,

Verse and Music:

1 John 1:5
This is the message we have heard from him and proclaim to you, that God is light, and in him is no darkness at all.

Toby Mac (In The Light) – music video

,,,Whereas the atheists have crickets chirping,,

Cricket Chirping

96.
kairosfocus says:

F/N a: I clip my always linked, here on in Section A:

____________

>>we may average the information per symbol in the communication system thusly (giving in terms of -H to make the additive relationships clearer):

– H = p1 log p1 + p2 log p2 + . . . + pn log pn

or, H = – SUM [pi log pi] . . . Eqn 5

H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: “it is often referred to as the entropy of the source.” [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information, cf also here):

At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann’s constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing.

But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics. [Also, another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, “Gain in entropy always means loss of information, and nothing more” . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
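The Landauer bound mentioned at the end of that excerpt is easy to put in numbers. This small sketch uses the standard constants; the 300 K room-temperature figure is my own illustrative choice:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K (illustrative)

# Landauer bound: erasing (or, for the demon, resetting) one bit of stored
# information dissipates at least k_B * T * ln 2 of heat, i.e. dS >= k_B ln 2.
E_per_bit = k_B * T * math.log(2)
print(E_per_bit)     # ~2.87e-21 J per bit

# The entropy cost per bit, in units of k_B, is just ln 2:
print(math.log(2))   # ~0.693
```

This is the quantitative sense in which the demon's information-gathering "pays for" any entropy reduction he engineers, so the total entropy does not decrease.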

Summarising Harry Robertson’s Statistical Thermophysics (Prentice-Hall International, 1993) — excerpting desperately and adding emphases and explanatory comments, we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the very large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.)

For, as he astutely observes on pp. vii – viii:

. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .

And, in more details, (pp. 3 – 6, 7, 36, cf Appendix 1 below for a more detailed development of thermodynamics issues and their tie-in with the inference to design; also see recent ArXiv papers by Duncan and Samura here and here):

. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . .

[deriving informational entropy, cf. discussions here, here, here, here and here; also Sarfati’s discussion of debates and the issue of open systems here . . . ]

H({pi}) = – C [SUM over i] pi*ln pi, [. . . “my” Eqn 6]

[where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp – beta*yi) = Z [Z being in effect the partition function across microstates, the “Holy Grail” of statistical thermodynamics]. . . .

[H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . .
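Robertson's maximum-entropy result just excerpted (his Eqns (1) and (2), with beta becoming 1/kT) can be sketched numerically. The energy levels and beta value below are toy figures of my own, chosen only to show that the pi form a normalised distribution weighted toward the lower energy levels:

```python
import math

def boltzmann_probs(energies, beta):
    # pi = exp(-beta * ei) / Z, with Z = SUM exp(-beta * ei)
    # (Robertson's Eqns (1)-(2); Z is the partition function)
    weights = [math.exp(-beta * e) for e in energies]
    Z = sum(weights)
    return [w / Z for w in weights]

# Three-level toy system (energies in units where beta = 1):
p = boltzmann_probs([0.0, 1.0, 2.0], beta=1.0)
print(p)        # lowest level most probable, probabilities fall off exponentially
print(sum(p))   # -> 1.0 (up to rounding): a proper probability distribution
```

The point of the exercise is only that the "thermodynamic" distribution drops out of exactly the information-theoretic machinery Robertson describes.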

Jaynes’ [summary rebuttal to a typical objection] is “. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.” . . . . [pp. 3 – 6, 7, 36; replacing Robertson’s use of S for Informational Entropy with the more standard H.]

As is discussed briefly in Appendix 1, Thaxton, Bradley and Olsen [TBO], following Brillouin et al, in the 1984 foundational work for the modern Design Theory, The Mystery of Life’s Origins [TMLO], exploit this information-entropy link, through the idea of moving from a random to a known microscopic configuration in the creation of the bio-functional polymers of life, and then — again following Brillouin — identify a quantitative information metric for the information of polymer molecules. For, in moving from a random to a functional molecule, we have in effect an objective, observable increment in information about the molecule. This leads to energy constraints, thence to a calculable concentration of such molecules in suggested, generously “plausible” primordial “soups.” In effect, so unfavourable is the resulting thermodynamic balance, that the concentrations of the individual functional molecules in such a prebiotic soup are arguably so small as to be negligibly different from zero on a planet-wide scale.

By many orders of magnitude, we don’t get to even one molecule each of the required polymers per planet, much less bringing them together in the required proximity for them to work together as the molecular machinery of life. The linked chapter gives the details. More modern analyses [e.g. Trevors and Abel, here and here], however, tend to speak directly in terms of information and probabilities rather than the more arcane world of classical and statistical thermodynamics, so let us now return to that focus; in particular addressing information in its functional sense, as the third step in this preliminary analysis . . . >>

____________

In short, heat, energy moving from one body to another by radiation, conduction or convection, is a particular and important case of a much wider phenomenon (and CS3’s set of clips is excellent).

More to follow . . .

KF

97.
kairosfocus says:

F/N b: Going on to Appendix I in the same always linked note:

____________

>>Let us reflect on a few remarks on the link from thermodynamics to information:

1] TMLO: In 1984, this well-received work provided the breakthrough critical review on the origin of life that led to the modern design school of thought in science. The three online chapters, as just linked, should be carefully read to understand why design thinkers think that the origin of FSCI in biology is a significant and unmet challenge to neo-darwinian thought. (Cf also Klyce’s relatively serious and balanced assessment, from a panspermia advocate. Sewell’s remarks here are also worth reading. So is Sarfati’s discussion of Dawkins’ Mt Improbable.)

2] But open systems can increase their order: This is the “standard” dismissal argument on thermodynamics, but it is both fallacious and often resorted to by those who should know better. My own note on why this argument should be abandoned is:

a] Clausius is the founder of the 2nd law, and the first standard example of an isolated system — one that allows neither energy nor matter to flow in or out — is instructive, given the “closed” subsystems [i.e. allowing energy to pass in or out] in it. Pardon the substitute for a real diagram, for now:

Isol System:

| | (A, hot, at Th) –> d’Q, heat –> (B, cold, at Tc) | |

b] Now, we introduce entropy change dS >/= d’Q/T . . . “Eqn” A.1

c] So, dSa >/= -d’Q/Th, and dSb >/= +d’Q/Tc, where Th > Tc

d] That is, for the system as a whole, dStot = dSa + dSb >/= d’Q/Tc – d’Q/Th > 0, as Th > Tc . . . “Eqn” A.2
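A quick numeric run-through of "Eqns" A.1 and A.2, taking the minimum (reversible-limit) entropy changes; the temperatures and heat quantity are illustrative values of my own:

```python
# Entropy bookkeeping for the Clausius isolated-system example above.
d_Q = 100.0   # J of heat passed from A to B
T_h = 400.0   # K, temperature of hot body A
T_c = 300.0   # K, temperature of cold body B

dS_a = -d_Q / T_h   # A loses entropy on exporting d'Q
dS_b = +d_Q / T_c   # B gains entropy on importing the same raw energy
dS_tot = dS_a + dS_b

print(dS_a)    # -0.25 J/K
print(dS_b)    # ~0.333 J/K
print(dS_tot)  # ~0.083 J/K > 0, as Th > Tc requires
```

Note that B's entropy rise outweighs A's entropy fall precisely because Tc < Th, which is the point made in step e].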

e] But, observe: the subsystems A and B are open to energy inflows and outflows, and the entropy of B RISES DUE TO THE IMPORTATION OF RAW ENERGY.

f] The key point is that when raw energy enters a body, it tends to make its entropy rise. This can be envisioned on a simple model of a gas-filled box with piston-ends at the left and the right:

=================================
||::::::::::::::::::::::::::::::::::::::::::||
||::::::::::::::::::::::::::::::::::::::::::||===
||::::::::::::::::::::::::::::::::::::::::::||
=================================

1: Consider a box as above, filled with tiny perfectly hard marbles [so collisions will be elastic], scattered similar to a raisin-filled Christmas pudding (pardon how the textual elements give the impression of a regular grid, think of them as scattered more or less hap-hazardly as would happen in a cake).

2: Now, let the marbles all be at rest to begin with.

3: Then, imagine that a layer of them up against the leftmost wall were given a sudden, quite, quite hard push to the right [the left and right ends are pistons].

4: Simply on Newtonian physics, the moving balls would begin to collide with the marbles to their right, and in this model perfectly elastically. So, as they hit, the other marbles would be set in motion in succession. A wave of motion would begin, rippling from left to right.

5: As the glancing angles on collision vary at random, the marbles that are hit and the original marbles would soon begin to bounce in all sorts of directions. Then, they would also deflect off the walls, bouncing back into the body of the box and other marbles, causing the motion to continue indefinitely.

6: Soon, the marbles will be continually moving in all sorts of directions, with varying speeds, forming what is called the Maxwell-Boltzmann distribution, a bell-shaped curve.

7: And, this pattern would emerge independent of the specific initial arrangement or how we impart motion to it, i.e. this is an attractor in the phase space: once the marbles are set in motion somehow, and move around and interact, they will soon enough settle into the M-B pattern. E.g. the same would happen if a small charge of explosive were set off in the middle of the box, pushing out the balls there into the rest, and so on. And once the M-B pattern sets in, it will strongly tend to continue. (That is, the process is ergodic.)

8: A pressure would be exerted on the walls of the box by the average force per unit area from collisions of marbles bouncing off the walls, and this would be increased by pushing in the left or right walls (which would do work to push in against the pressure, naturally increasing the speed of the marbles just like a ball has its speed increased when it is hit by a bat going the other way, whether cricket or baseball). Pressure rises, if volume goes down due to compression. (Also, volume of a gas body is not fixed.)

9: Temperature emerges as a measure of the average random kinetic energy of the marbles in any given direction, left, right, to us or away from us. Compressing the model gas does work on it, so the internal energy rises, as the average random kinetic energy per degree of freedom rises. Compression will tend to raise temperature. (We could actually deduce the classical — empirical — P, V, T gas laws [and variants] from this sort of model.)

10: Thus, from the implications of classical, Newtonian physics, we soon see the hard little marbles moving at random, and how that randomness gives rise to gas-like behaviour. It also shows how there is a natural tendency for systems to move from more orderly to more disorderly states, i.e. we see the outlines of the second law of thermodynamics.

11: Is the motion really random? First, we define randomness in the relevant sense:

In probability and statistics, a random process is a repeating process whose outcomes follow no describable deterministic pattern, but follow a probability distribution, such that the relative probability of the occurrence of each outcome can be approximated or calculated. For example, the rolling of a fair six-sided die in neutral conditions may be said to produce random results, because one cannot know, before a roll, what number will show up. However, the probability of rolling any one of the six rollable numbers can be calculated.

12: This can be seen by the extension of the thought experiment of imagining a large collection of more or less identically set up boxes, each given the same push at the same time, as closely as we can make it. At first, the marbles in the boxes will behave very much alike, but soon, they will begin to diverge as to path. The same overall pattern of M-B statistics will happen, but each box will soon be going its own way. That is, the distribution pattern is the same but the specific behaviour in each case will be dramatically different.

13: Q: Why?

14: A: This is because tiny, tiny differences between the boxes, and the differences in the vibrating atoms in the walls and pistons, as well as tiny irregularities too small to notice in the walls and pistons will make small differences in initial and intervening states — perfectly smooth boxes and pistons are an unattainable ideal. Since the system is extremely nonlinear, such small differences will be amplified, making the behaviour diverge as time unfolds. A chaotic system is not predictable in the long term. So, while we can deduce a probabilistic distribution, we cannot predict the behaviour in detail, across time. Laplace’s demon who hoped to predict the future of the universe from the covering laws and the initial conditions, is out of a job.
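The divergence of nearly identical boxes (points 12 to 14) can be illustrated with any strongly nonlinear iterated map; the logistic map below is my own stand-in for the marble boxes, not part of the original note. Two runs differing by one part in a trillion soon bear no resemblance to each other:

```python
# Sensitive dependence on initial conditions, via the chaotic logistic map
# x -> 4x(1-x): a proxy for two "identically" prepared marble boxes.
def iterate(x, steps):
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

x1 = 0.123456789
x2 = x1 + 1e-12   # a difference far too small to measure

for n in (10, 30, 50):
    print(n, abs(iterate(x1, n) - iterate(x2, n)))
# The gap grows roughly exponentially until it is of order 1,
# after which the two trajectories are effectively unrelated.
```

As in point 14: the statistical distribution is predictable, the detailed path is not, and Laplace's demon is out of a job.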

15: To see diffusion in action, imagine that at the beginning, the balls in the right half were red, and those in the left half were black. After a little while, as they bounce and move, the balls would naturally mix up, and it would be very unlikely indeed — though logically possible — for them to spontaneously un-mix, as the number of possible combinations of position, speed and direction where the balls are mixed up is vastly more than those where they are all red to the right, all black to the left or something similar.

(This can be calculated, by breaking the box up into tiny little cells such that they would have at most one ball in them, and we can analyse each cell on occupancy, colour, location, speed and direction of motion. Thus, we have defined a phase or state space, going beyond a mere configuration space that just looks at locations.)

16: So, from the orderly arrangement of laws and patterns of initial motion, we see how randomness emerges through the sensitive dependence of the behaviour on initial and intervening conditions. There would be no specific, traceable deterministic pattern that one could follow or predict for the behaviour of the marbles, though we could work out an overall statistical distribution, and could identify overall parameters such as volume, pressure and temperature.

17: For Osmosis, let us imagine that the balls are of different size, and that we have two neighbouring boxes with a porous wall between them; but only the smaller marbles can pass through the holes. If the smaller marbles were initially on say the left side, soon, they would begin to pass through to the right, until they were evenly distributed, so that on average as many small balls would pass left as were passing right, i.e., we see dynamic equilibrium. [this extends to evaporation and the vapour pressure of a liquid, once we add in that the balls have a short-range attraction that at even shorter ranges turns into a sharp repulsion, i.e they are hard.]

18: For a solid, imagine that the balls in the original box are now connected through springs in a cubical grid. The initial push will now set the balls to vibrating back and forth, and the same pattern of distributed vibrations will emerge, as one ball pulls on its neighbours in the 3-D array. (For a liquid, allow about 3% of holes in the grid, and let the balls slide over one another, making new connexions, some of them distorted. The fixed volume but inability to keep a shape that defines a liquid will emerge. The push on the liquid will have much the same effect as for the solid, except that it will also lead to flows.)

19: Randomness is thus credibly real, and naturally results from work on or energy injected into a body composed of microparticles, even in a classical Newtonian world; whether it is gas, solid or liquid. Raw injection of energy into a body tends to increase its disorder, and this is typically expressed in its temperature rising.

20: Quantum theory adds to the picture, but the above is enough to model a lot of what we see as we look at bulk and transport properties of collections of micro-particles.

21: Indeed, even viscosity comes out naturally, as . . . if there are boxes stacked top and bottom that are sliding left or right relative to one another, and suddenly the intervening walls are removed, the gas-balls would tend to diffuse up and down from one stream tube to another, so their drift velocities will tend to even out. The slower moving stream tubes exert a dragging effect on the faster moving ones.

22: And many other phenomena can be similarly explained and applied, based on laws and processes that we can test and validate, and their consequences in simplified but relevant models of the real world.

23: When we see such a close match, especially when quantum principles are added in, it gives us high confidence that we are looking at a map of reality. Not the reality itself, but a useful map. And, that map tells us that thanks to sensitive dependence on initial conditions, randomness will be a natural part of the micro-world, and that when energy is added to a body its randomness tends to increase, i.e we see the principle of entropy, and why simply opening up a body to receive energy is not going to answer to the emergence of functional internal organisation.

24: For, organised states will be deeply isolated in the set of possible configurations. Indeed, if we put a measure of possible configurations in terms of say binary digits, bits, if we have 1,000 two-state elements there are already 1.07*10^301 possible configs. The whole observed universe searching at one state per Planck time, could not go through enough states of its 10^80 or so atoms, across its thermodynamically credible lifespan — about 50 mn times the 13.7 BY said to have elapsed from the big bang — to go through more than about 10^150 states. That is, the whole cosmos could not search more than a negligible fraction of the space. The hay stack could be positively riddled with needles, but at that rate we have not had any serious search at all.
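The arithmetic in point 24 is easy to verify directly (the 10^150 figure is the note's own generous upper bound on states searched; the rest follows):

```python
# 1,000 two-state elements give 2^1000 possible configurations.
n_configs = 2 ** 1000
print(f"{n_configs:.3e}")     # ~1.07e+301 possible configs
print(len(str(n_configs)))    # 302 digits, i.e. just over 10^301

# Compare a generous upper bound of ~10^150 states searched by the cosmos:
searched = 10 ** 150
fraction = searched / n_configs
print(fraction)               # ~9.3e-152: a negligible sliver of the space
```

That is the quantitative content of the "haystack" remark: even an exhaustive cosmic-scale search samples effectively none of the configuration space.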

25: That is, there is a dominant distribution, not a detailed plan a la Laplace’s (finite) Demon who could predict the long term path of the world on its initial conditions and sufficient calculating power and time.

26: But equally, since short term interventions that are subtle can have significant effects, there is room for the intelligent and sophisticated intervention; e.g. through a Maxwell’s Demon who can spot faster moving and slower moving molecules and open/shut a shutter to set one side hotter and the other colder in a partitioned box. Providing he has to take active steps to learn which molecules are moving faster/slower in the desired direction, Brillouin showed that he will be within the second law of thermodynamics.

. . . So, plainly, for the injection of energy to instead predictably and consistently do something useful, it needs to be coupled to an energy conversion device.

g] When such energy conversion devices, as in the cell, exhibit FSCI, the question of their origin becomes material, and in that context, their spontaneous origin is strictly logically possible but — from the above — negligibly different from zero probability on the gamut of the observed cosmos. (And, kindly note: the cell is an energy importer with an internal energy converter. That is, the appropriate entity in the model is B and onward B’ below. Presumably as well, the prebiotic soup would have been energy importing, and so materialistic chemical evolutionary scenarios therefore have the challenge to credibly account for the origin of the FSCI-rich energy converting mechanisms in the cell relative to Monod’s “chance + necessity” [cf also Plato’s remarks] only.)

h] Now, as just mentioned, certain bodies have in them energy conversion devices: they COUPLE input energy to subsystems that harvest some of the energy to do work, exhausting sufficient waste energy to a heat sink that the overall entropy of the system is increased. Illustratively, for heat engines — and (in light of exchanges with email correspondents circa March 2008) let us note: a good slice of classical thermodynamics arose in the context of studying, idealising and generalising from steam engines [which exhibit organised, functional complexity, i.e FSCI; they are of course artifacts of intelligent design and also exhibit step-by-step problem-solving processes (even including “do-always” looping!)]:

| | (A, heat source: Th): d’Qi –> (B’, heat engine, Te): –>

d’W [work done on say D] + d’Qo –> (C, sink at Tc) | |

i] A’s entropy: dSa >/= – d’Qi/Th

j] C’s entropy: dSc >/= + d’Qo/Tc

k] The rise in entropy in B, C and in the object on which the work is done, D, say, compensates for that lost from A. The second law — unsurprisingly, given the studies on steam engines that lie at its roots — holds for heat engines.

l] However, B, since it now couples energy into work and exhausts waste heat, does not necessarily undergo a rise in entropy on having imported d’Qi. [The problem is to explain the origin of the heat engine — or more generally, energy converter — that does this, if it exhibits FSCI.]
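The entropy bookkeeping of steps h] to l] can be run with numbers. The temperatures, heat and work figures below are my own illustrative choices; the engine B is taken to run in a cycle, so its own entropy change per cycle is zero:

```python
# Heat-engine entropy bookkeeping for the diagram above:
# A (source, Th) -> d'Qi -> B (engine) -> d'W + d'Qo -> C (sink, Tc).
T_h, T_c = 500.0, 300.0   # K, source and sink temperatures
Q_i = 1000.0              # J drawn from A
W = 300.0                 # J of work done on D (below the Carnot limit)
Q_o = Q_i - W             # J exhausted to C, by energy conservation

dS_a = -Q_i / T_h         # A: -2.0 J/K
dS_c = +Q_o / T_c         # C: ~+2.33 J/K
dS_total = dS_a + dS_c    # B itself is cyclic, so dS_b = 0 per cycle

print(dS_total)           # ~0.333 J/K >= 0: the second law holds

# Carnot limit: the most work extractable from Q_i across Th -> Tc.
print(Q_i * (1 - T_c / T_h))   # 400.0 J, so W = 300 J is allowed
```

Any W above the Carnot limit would make dS_total negative, which is exactly what the second law forbids.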

m] There is also a material difference between the sort of heat engine [an instance of the energy conversion device mentioned] that forms spontaneously as in a hurricane [directly driven by boundary conditions in a convective system on the planetary scale, i.e. an example of order], and the sort of complex, organised, algorithm-implementing energy conversion device found in living cells [the DNA-RNA-Ribosome-Enzyme system, which exhibits massive FSCI].

n] In short, the decisive problem is the [im]plausibility of the ORIGIN of such a FSCI-based energy converter through causal mechanisms traceable only to chance conditions and undirected [non-purposive] natural forces. This problem yields a conundrum for chem evo scenarios, such that inference to agency as the probable cause of such FSCI — on the direct import of the many cases where we do directly know the causal story of FSCI — becomes the better explanation. As TBO say, in bridging from a survey of the basic thermodynamics of living systems in CH 7, to that more focussed discussion in ch’s 8 – 9:

While the maintenance of living systems is easily rationalized in terms of thermodynamics, the origin of such living systems is quite another matter. Though the earth is open to energy flow from the sun, the means of converting this energy into the necessary work to build up living systems from simple precursors remains at present unspecified (see equation 7-17). The “evolution” from biomonomers to fully functioning cells is the issue. Can one make the incredible jump in energy and organization from raw material and raw energy, apart from some means of directing the energy flow through the system? In Chapters 8 and 9 we will consider this question, limiting our discussion to two small but crucial steps in the proposed evolutionary scheme, namely, the formation of protein and DNA from their precursors.

It is widely agreed that both protein and DNA are essential for living systems and indispensable components of every living cell today.11 Yet they are only produced by living cells. Both types of molecules are much more energy and information rich than the biomonomers from which they form. Can one reasonably predict their occurrence given the necessary biomonomers and an energy source? Has this been verified experimentally? These questions will be considered . . . [Bold emphasis added. Cf summary in the peer-reviewed journal of the American Scientific Affiliation, “Thermodynamics and the Origin of Life,” in Perspectives on Science and Christian Faith 40 (June 1988): 72-83, pardon the poor quality of the scan. NB: as the journal’s online issues will show, this is not necessarily a “friendly audience.”]

3] So far we have worked out of a more or less classical view of the subject. But, to explore such a question further, we need to look more deeply at the microscopic level. Happily, there is a link from macroscopic thermodynamic concepts to the microscopic, molecular view of matter, as worked out by Boltzmann and others, leading to the key equation:

s = k ln W . . . “Eqn” A.3

That is, entropy of a specified macrostate [in effect, macroscopic description or specification] is a constant times a log measure of the number of ways matter and energy can be distributed at the micro-level consistent with that state [i.e. the number of associated microstates; aka “the statistical weight of the macrostate,” aka “thermodynamic probability”]. The point is, that there are as a rule a great many ways for energy and matter to be arranged at micro level relative to a given observable macro-state. That is, there is a “loss of information” issue here on going from specific microstate to a macro-level description, with which many microstates may be equally compatible. Thence, we can see that if we do not know the microstates specifically enough, we have to more or less treat the micro-distributions of matter and energy as random, leading to acting as though they are disordered. Or, as Leon Brillouin, one of the foundational workers in modern information theory, put it in his 1962 Science and Information Theory, Second Edition:

How is it possible to formulate a scientific theory of information? The first requirement is to start from a precise definition. . . . . We consider a problem involving a certain number of possible answers, if we have no special information on the actual situation. When we happen to be in possession of some information on the problem, the number of possible answers is reduced, and complete information may even leave us with only one possible answer. Information is a function of the ratio of the number of possible answers before and after, and we choose a logarithmic law in order to insure additivity of the information contained in independent situations [as seen above in the main body, section A] . . . .

Physics enters the picture when we discover a remarkable likeness between information and entropy. This similarity was noticed long ago by L. Szilard, in an old paper of 1929, which was the forerunner of the present theory. In this paper, Szilard was really pioneering in the unknown territory which we are now exploring in all directions. He investigated the problem of Maxwell’s demon, and this is one of the important subjects discussed in this book. The connection between information and entropy was rediscovered by C. Shannon in a different class of problems, and we devote many chapters to this comparison. We prove that information must be considered as a negative term in the entropy of a system; in short, information is negentropy. The entropy of a physical system has often been described as a measure of randomness in the structure of the system. We can now state this result in a slightly different way:

Every physical system is incompletely defined. We only know the values of some macroscopic variables, and we are unable to specify the exact positions and velocities of all the molecules contained in a system. We have only scanty, partial information on the system, and most of the information on the detailed structure is missing. Entropy measures the lack of information; it gives us the total amount of missing information on the ultramicroscopic structure of the system.

This point of view is defined as the negentropy principle of information [added links: cf. explanation here and “onward” discussion here — noting on the brief, dismissive critique of Brillouin there, that you never get away from the need to provide information — there is “no free lunch,” as Dembski has pointed out ; ->) ], and it leads directly to a generalization of the second principle of thermodynamics, since entropy and information must be discussed together and cannot be treated separately. This negentropy principle of information will be justified by a variety of examples ranging from theoretical physics to everyday life. The essential point is to show that any observation or experiment made on a physical system automatically results in an increase of the entropy of the laboratory. It is then possible to compare the loss of negentropy (increase of entropy) with the amount of information obtained. The efficiency of an experiment can be defined as the ratio of information obtained to the associated increase in entropy. This efficiency is always smaller than unity, according to the generalized Carnot principle. Examples show that the efficiency can be nearly unity in some special examples, but may also be extremely low in other cases.

This line of discussion is very useful in a comparison of fundamental experiments used in science, more particularly in physics. It leads to a new investigation of the efficiency of different methods of observation, as well as their accuracy and reliability . . . . [From an online excerpt of the Dover Reprint edition, here. Emphases, links and bracketed comment added.]

4] Yavorski and Pinski, in the textbook Physics, Vol I [MIR, USSR, 1974, pp. 279 ff.], summarise the key implication of the macro-state and micro-state view well: as we consider a simple model of diffusion, let us think of ten white and ten black balls in two rows in a container. There is of course but one way in which there are ten whites in the top row; the balls of any one colour being for our purposes identical. But on shuffling, there are 63,504 ways to arrange five each of black and white balls in the two rows, and 6-4 distributions may occur in two ways, each with 44,100 alternatives. So, if we for the moment see the set of balls as circulating among the various different possible arrangements at random, and spending about the same time in each possible state on average, the time the system spends in any given state will be proportionate to the relative number of ways that state may be achieved. Immediately, we see that the system will gravitate towards the cluster of more evenly distributed states. In short, we have just seen that there is a natural trend of change at random, towards the more thermodynamically probable macrostates, i.e. the ones with higher statistical weights. So “[b]y comparing the [thermodynamic] probabilities of two states of a thermodynamic system, we can establish at once the direction of the process that is [spontaneously] feasible in the given system. It will correspond to a transition from a less probable to a more probable state.” [p. 284.] This is in effect the statistical form of the 2nd law of thermodynamics. Thus, too, the behaviour of the Clausius isolated system above is readily understood: importing d’Q of random molecular energy so far increases the number of ways energy can be distributed at micro-scale in B, that the resulting rise in B’s entropy swamps the fall in A’s entropy.
Moreover, given that FSCI-rich micro-arrangements are relatively rare in the set of possible arrangements, we can also see why it is hard to account for the origin of such states by spontaneous processes in the scope of the observable universe. (Of course, since it is as a rule very inconvenient to work in terms of statistical weights of macrostates [i.e. W], we instead move to entropy, through S = k ln W. Part of how this is done can be seen by imagining a system in which there are W ways accessible, and imagining a partition into parts 1 and 2. W = W1*W2, as for each arrangement in 1 all accessible arrangements in 2 are possible and vice versa, but it is far more convenient to have an additive measure, i.e. we need to go to logs. The constant of proportionality, k, is the famous Boltzmann constant and is in effect the universal gas constant, R, on a per molecule basis, i.e. we divide R by the Avogadro Number, NA, to get: k = R/NA. The two approaches to entropy, by Clausius, and Boltzmann, of course, correspond. In real-world systems of any significant scale, the relative statistical weights are usually so disproportionate, that the classical observation that entropy naturally tends to increase, is readily apparent.)
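The statistical weights in the ball model, and the additivity that motivates S = k ln W, can be checked numerically. A minimal sketch (only standard values for R and NA are added beyond the quantities named in the footnote):

```python
from math import comb, log

# Statistical weights for Yavorski & Pinski's model: 10 white and 10 black
# balls in two rows of 10. A macrostate with k whites in the top row can be
# realised in C(10,k) * C(10,10-k) = C(10,k)^2 ways.
def W(k):
    return comb(10, k) * comb(10, 10 - k)

assert W(10) == 1                 # all ten whites on top: a single way
assert W(5) == 63504              # the evenly mixed macrostate
assert W(6) == W(4) == 44100      # the two 6-4 distributions

# Boltzmann's constant on a per-molecule basis: k = R / NA.
R, N_A = 8.314, 6.022e23          # J/(mol K); molecules per mole
k_B = R / N_A                     # ~1.38e-23 J/K

def S(weight):
    """Boltzmann entropy S = k ln W."""
    return k_B * log(weight)

# Additivity: partitioning a system gives W = W1 * W2, and the log turns
# that product of counts into a sum of entropies, S = S1 + S2.
W1, W2 = 1e10, 2e12
assert abs(S(W1 * W2) - (S(W1) + S(W2))) < 1e-30
```

The assertions confirm both the quoted counts and why the move to logarithms is made: counts multiply across subsystems, entropies add.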

5] The above sort of thinking has also led to the rise of a school of thought in Physics — note, much spoken against in some quarters, but I think they clearly have a point — that ties information and thermodynamics together. Robertson presents their case; in summary:

. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability should be seen as, in part, an index of ignorance] . . . .

[deriving informational entropy, cf. discussions here, here, here, here and here; also Sarfati’s discussion of debates and open systems here; the debate here is eye-opening on rhetorical tactics used to cloud this and related issues . . . ]

S({pi}) = – C [SUM over i] pi*ln pi, [. . . “my” Eqn A.4]

[where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp – beta*yi) = Z [Z being in effect the partition function across microstates, the “Holy Grail” of statistical thermodynamics]. . . .[pp.3 – 6]

S, called the information entropy, . . . correspond[s] to the thermodynamic entropy, with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context [p. 7] . . . .

Jaynes’ [summary rebuttal to a typical objection] is “. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.” . . . . [p. 36.]

[Robertson, Statistical Thermophysics, Prentice Hall, 1993. (NB: Sorry for the math and the use of text for symbolism. However, it should be clear enough that Robertson first summarises how Shannon derived his informational entropy [though Robertson uses s rather than the usual H for that information theory variable, average information per symbol], then ties it to entropy in the thermodynamic sense using another relation that is tied to the Boltzmann relationship above. This context gives us a basis for looking at the issues that surface in prebiotic soup or similar models as we try to move from relatively easy to form monomers to the more energy- and information- rich, far more complex biofunctional molecules.)] >>
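Robertson's text-form Eqn A.4 above can be sketched numerically. This is only my illustration of the quoted formula, with C left as a free constant (setting C = k recovers the thermodynamic case):

```python
from math import log

def info_entropy(p, C=1.0):
    """Robertson's Eqn A.4: S({p_i}) = -C * SUM_i p_i ln p_i."""
    return -C * sum(pi * log(pi) for pi in p if pi > 0)

# Complete information: one outcome certain, the rest impossible -> S = 0.
assert info_entropy([1.0, 0.0, 0.0]) == 0.0

# Least information: a uniform distribution maximises S at C * ln(n).
n = 4
assert abs(info_entropy([1 / n] * n) - log(n)) < 1e-12
```

With C = k and the p_i assigned over energy microstates, the same sum is the Gibbs form of the thermodynamic entropy, which is the correspondence Robertson goes on to state.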

____________

In short, the rebuttals that try to suggest that an open system, as a consequence, poses no thermodynamic challenge to OOL and onward evolution are seriously flawed and simplistic.

KF

98.
kairosfocus says:

CS3, re #84: Well done. KF

99.

Eric:

1. So the 2nd Law relates only to heat distribution, in your view?

Ultimately, yes. That’s why it’s called the 2nd Law of Thermodynamics. But you need to read it in the context of the 1st Law of Thermodynamics, which is also the Law of Conservation of Energy:

the change in the internal energy of a closed system is equal to the amount of heat supplied to the system, minus the amount of work done by the system on its surroundings.

So when you lift a brick from the floor to a high shelf, you are doing work, and your entropy increases. But you have decreased the local entropy of the brick. It doesn’t get “hotter”, but you are increasing its capacity to make the floor hotter when the shelf gives way and the brick falls.

So while the 2nd Law always boils down to thermodynamics, you can also think in terms of macroscopic systems: a low entropy system is one in which more work can be done, after which the entropy of the system increases, and less work can be done.
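To put rough numbers on the brick example (the mass, height, and room temperature below are my illustrative assumptions, not figures from the comment):

```python
# Lifting stores work as potential energy; when the shelf gives way that
# energy dissipates as heat Q, and the surroundings gain entropy Q / T.
m, g, h = 1.0, 9.81, 2.0          # kg, m/s^2, m  (assumed values)
T = 293.0                          # K, assumed room temperature

Q = m * g * h                      # work stored, later released as heat (J)
dS = Q / T                         # entropy generated on dissipation (J/K)

assert abs(Q - 19.62) < 1e-9       # ~19.6 J for this brick and height
assert 0.06 < dS < 0.07            # ~0.067 J/K when the brick falls
```

The sign pattern matches the comment: the lifted brick is a small local low-entropy store, paid for by a larger entropy increase in the lifter and, eventually, the floor.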

2. And are there any metabolic or heat transfer or similar biological processes in living systems about which the 2nd Law might have something to say?

Of course there are. And biological processes absolutely do not violate the 2nd Law.

3. If the 2nd Law applies only to heat distribution, is there any similar principle at work with, say, functional mechanical structure or information?

There is certainly an almost identical concept in information theory, i.e. Shannon entropy, but, interestingly, Shannon entropy and thermodynamic entropy are inversely correlated. The more thermodynamic entropy increases, the more Shannon entropy decreases.

And functional mechanical structure is also covered by the 2nd Law, as I keep saying. A tornado that lifts a sofa into a tree has increased the capacity of the sofa to do work, and so has reduced the entropy of the arrangement of furniture, by virtue of an increase in the entropy of the tornado.

Granville’s paper claims that designed things and biological things are evidence of a violation of the 2nd Law of thermodynamics. They are not. There is no violation of the 2nd Law. The fact that a tornado cannot build a house is not evidence that when a person builds a house the 2nd Law is violated. It isn’t.

If Granville wants to propose another Law that is violated when a person builds a house, or a biological organism develops, or evolves, then fine (Dembski has attempted to do so). But his argument from the 2nd Law of Thermodynamics is patently false because the 2nd Law is not violated by such things, whether you interpret it at the microscopic or macroscopic level. To quote the Flanders and Swann song I linked earlier:

The first law of thermodynamics
Heat is work and work is heat (x 2)
Very good

The second law of thermodynamics
Heat cannot of itself pass from one body to a hotter body (x 2)
Heat won’t pass from a cooler to a hotter (x 2)
You can try it if you like but you far better notter (x 2)
‘Cause the cold in the cooler will be hotter as a ruler (x 2)
Because the hotter body’s heat will pass through the cooler

Heat is work and work is heat
And work is heat and heat is work
Heat will pass by conduction (x 2)
And heat will pass by convection (x 2)
And heat will pass by radiation (x 2)
And that’s a physical law

Heat is work and work’s a curse
And all the heat in the universe
It’s gonna cool down as it can’t increase
Then there’ll be no more work
And there’ll be perfect peace
Really?
Yeah, that’s entropy, man!
And all because of the second law of thermodynamics, which lays down

That you can’t pass heat from the cooler to the hotter
Try it if you like but you far better notter
‘Cause the cold in the cooler will get hotter as a ruler
‘Cause the hotter body’s heat will pass through the cooler

Oh, you can’t pass heat from the cooler to the hotter
You can try it if you like but you far better notter
‘Cause the cold in the cooler will get hotter as a ruler
That’s the physical law

Ooh, I’m hot!

What? That’s because you’ve been working

Oh, Beatles? Nothing!

That’s the first and second law of thermodynamics

100.

Eric:

Look, the only reason the sun came up is because a number of evolution apologists have said, when confronted with questions, “But the Earth is an open system; it receives energy from the Sun and the Sun’s entropy is increasing to compensate for what happens on Earth.”

Confronted with what questions, Eric?

Are you still under the impression that local entropy decreases on earth are violations of the 2nd Law?

I accept keiths’ argument that even if we consider the earth a closed system, the 2nd Law is not violated, because local decreases in entropy on earth are gained at the cost of increases in the surroundings.

Are you saying that you think that the 2nd Law IS violated on earth and that considering the earth as an open system doesn’t help?

In fact does anyone here still think that the 2nd Law is violated by the existence or the artefacts of biological organisms here on earth?

101.
steveh says:

Could someone help me out here by giving some examples of objects or processes designed and built by humans which violate the second law of thermodynamics?

102.
Gordon Davisson says:

From Collin #2:

Granville,

May I suggest an addition to your theory (forgive me if you’ve already said this before): “It is evident that entropy is not leaving the earth fast enough to compensate for the order developing on the earth.” This assertion would require some kind of measurement of entropy and order and applying it over the billions of years of earth’s existence. That would be difficult, but I wonder if a good estimate is possible.

Already done. A couple of years ago, I posted an analysis giving a lower bound of 3.3e14 J/K per second for the net entropy flux leaving Earth (that’s at least 3.7e14 J/K per second leaving Earth as thermal radiation, minus 3.83e13 J/K per second incoming as sunlight). Note that this is a lower bound; the actual value will be somewhat higher (though it’s probably within a factor of two). If that rate has been constant over the 4.54 billion years Earth has been around (it hasn’t, but probably close enough), it comes to 4.7e31 J/K.

Emory Bunn’s paper “Evolution and the second law of thermodynamics” estimates the entropy decrease for life at 1e44 * k (where k is Boltzmann’s constant) = 1.4e21 J/K. If this is even vaguely close to correct, the Earth’s entropy flux is clearly large enough to allow for the evolution, growth, reproduction, etc of life.
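The arithmetic in the two paragraphs above can be verified directly; a quick sketch using only the figures quoted in the comment plus the standard seconds-per-year conversion:

```python
k_B = 1.38e-23                     # Boltzmann constant, J/K

out_flux = 3.7e14                  # J/K per s leaving Earth (thermal radiation)
in_flux = 3.83e13                  # J/K per s arriving as sunlight
net_flux = out_flux - in_flux      # net entropy export
assert abs(net_flux - 3.3e14) / 3.3e14 < 0.01

age_s = 4.54e9 * 3.156e7           # Earth's age in seconds (~3.156e7 s/yr)
total = net_flux * age_s
assert 4.6e31 < total < 4.8e31     # ~4.7e31 J/K over Earth's history

life = 1e44 * k_B                  # Bunn's estimate of life's entropy decrease
assert 1.3e21 < life < 1.5e21      # ~1.4e21 J/K
assert total / life > 1e10         # export exceeds requirement by ~10 orders
```

Even granting Bunn's estimate only order-of-magnitude accuracy, the margin is roughly ten orders of magnitude, which is the point of the comment.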

I’m not knowledgeable enough about biochemistry to evaluate that part of Bunn’s calculation, but the only relevant critique I’ve seen (Sewell’s “Poker Entropy and the Theory of Compensation”, which doesn’t seem to be online anymore) is based entirely on the false claim that

According to Styer and Bunn, the Boltzmann formula, which relates the thermal entropy of an ideal gas state to the number of possible microstates, and thus to the probability of the state, can be used to compute the change in thermal entropy associated with any change in probability: not just the probability of an ideal gas state, but the probability of anything.

Since Bunn’s calculation doesn’t relate to probability at all, only the relation between the number of microstates (“multiplicity”) and entropy, and this relation does hold universally, the criticism is misdirected.

103.
Granville Sewell says:

Thanks CS3 (comment 84). Your summary of my arguments is better than my original!

104.

And cs3 is making exactly the same error!

Which is to equate “order” with “low entropy” and also with “low probability”.

Will one of you please say what you mean by a low probability state?

What possible meaning can that phrase have unless you also specify the generative process under which you are estimating that probability?

And once you have stated that, will you please explain why any process by which an otherwise low-probability state becomes probable is a violation of the 2nd Law of thermodynamics?

105.
Joe says:

Evidence, Elizabeth: why is it that you cannot produce any evidence that refutes Dr Sewell, CS3, KF, Eric, et al.?

The fact that a tornado cannot build a house is not evidence that when a person builds a house the 2nd Law is violated.

LoL! No Lizzie, if nature, operating freely, built the house THEN the 2LoT would be violated.

106.
cantor says:

There’s a small book and a deck of cards sitting on a table.

The deck is brand new (and sorted, as most new decks are).

I pick up the deck and shuffle it thoroughly, then place it on top of the book, so it is now 1″ above the table top.

Multiple choice: The entropy of the deck of cards is now:

1) lower

2) higher

3) the same

4) it depends on your definition of entropy

107.

The thermodynamic entropy is slightly lower, but yours is slightly higher.

The Shannon entropy of the deck is identical.

108.

Would someone like to tell me what they mean by a “low probability state”?

Granville? cs3?

109.
cantor says:

Liddle@107:
The thermodynamic entropy is slightly lower…

The shannon entropy of the deck is identical.

.

The Young & Freedman entropy of the deck is higher.

The correct answer seems to be (4).

110.
cantor says:

Liddle@108:
Would someone like to tell me what they mean by a “low probability state”?

Young & Freedman quoted @84:

For any system the most probable macroscopic state is the one with the greatest number of corresponding microscopic states, which is also the macroscopic state with the greatest disorder and the greatest entropy.

111.

OK, good, thanks.

So which arrangement has the greatest number of corresponding microstates, a junkyard before a tornado, or a junkyard after a tornado?

Similarly, which arrangement has the greatest number of corresponding microstates, the parts of a self-build computer before you have assembled it, or the parts of a self-build computer after you have assembled it?

112.

cantor:

The Young & Freedman entropy of the deck is higher.

The correct answer seems to be (4).

That doesn’t appear to follow from your link. Where does it say anything that implies that a shuffled deck has more entropy (by any definition) than an ordered deck?

113.

Elizabeth @99:

So while the 2nd Law always boils down to thermodynamics, you can also think in terms of macroscopic systems: a low entropy system is one in which more work can be done, after which the entropy of the system increases, and less work can be done.

Good, so if we have a macroscopic biological system that can do more work than the same molecules randomly floating around in a dish, then we are dealing with a thermodynamic issue. And what is it that prevents those randomly floating molecules from coming together again to form a macroscopic system that can perform more work?

Of course there are [metabolic or heat transfer or similar biological processes in living systems about which the 2nd Law might have something to say]. And biological processes absolutely do not violate the 2nd Law.

Nobody ever said they did.

Granville’s paper claims that designed things and biological things are evidence of a violation of the 2nd Law of thermodynamics. They are not. There is no violation of the 2nd Law. The fact that a tornado cannot build a house is not evidence that when a person builds a house the 2nd Law is violated. It isn’t.

If Granville wants to propose another Law that is violated when a person builds a house, or a biological organism develops, or evolves, then fine (Dembski has attempted to do so). But his argument from the 2nd Law of Thermodynamics is patently false because the 2nd Law is not violated by such things, whether you interpret it at the microscopic or macroscopic level.

Sheesh, we must all be talking past each other because I’m wondering if you (and keiths) really believe Granville is arguing that the 2nd Law has been violated in biology, or whether you understand that he is arguing that it would be if certain things just came together spontaneously.

Look, suppose someone came along claiming to have invented a perpetual motion machine and you said they were crazy because such a thing would violate the laws of thermodynamics. Then a third party listening in started harping on your conclusion and repeatedly and publicly asserting to everyone that you had claimed the “laws of thermodynamics had been violated,” you’d probably get a little peeved.

Granville has never said the 2nd Law has been violated and — contra your silly statement — has certainly never said it is violated in building a house or something like that. What he is saying, as near as I can tell, is that if what evolutionists claim happened actually had happened, then it would be a violation of the broader principles underlying the 2nd law.

Everyone knows that intelligent beings can design and build things that harness energy and perform work in ways that would not naturally occur under the 2nd Law acting with the primary forces of nature. Granville is trying to say: “Everyone recognizes this in the case of human design. Why, when we see the same thing in living systems, does the inference just get tossed aside and we end up with silly assertions like ‘Well, the Earth is an open system’ and so on?”

He is saying: look, there is this thermodynamic issue to consider (which evolutionists like Pross acknowledge in the paper I linked to).

The response from evolutionists has been to claim, variously:
(i) thermodynamics is irrelevant to living systems [false],
(ii) thermodynamics only relates to heat dissipation [rational, but highly arguable],
(iii) the Earth is an open system and it gets compensated for somewhere else [silly and misses the whole point], or
(iv) there is a new concept (like “kinetic states”) that can resolve the thermodynamic issue and make the implausible plausible [absurd].

114.
cantor says:

Liddle@112:
That doesn’t appear to follow from your link. Where does it say anything that implies that a shuffled deck has more entropy (by any definition) than an ordered deck?

Seriously? If you can’t make the connection, substitute a shoebox with 100 coins lying on the bottom, all heads, for the new deck of cards. Shake the box instead of shuffling the deck. Put the box on top of the book.

Young&Freedman, quoted @84:
To make the connection to the concept of entropy, note that N coins that are all heads constitutes a completely ordered macroscopic state: the description “all heads” completely specifies the state of each one of the N coins. The same is true if the coins are all tails. But the macroscopic description “half heads, half tails” by itself tells you very little about the state (heads or tails) of each individual coin. We say that the system is disordered because we know so little about its microscopic state. Compared to the state “all heads” or “all tails”, the state “half heads, half tails” has a much greater number of possible microstates, much greater disorder, and hence much greater entropy (which is a quantitative measure of disorder).

115.
cantor says:

Liddle@111:

OK, good, thanks.

So which arrangement has the greatest number of corresponding microstates, a junkyard before a tornado, or a junkyard after a tornado?

Similarly, which arrangement has the greatest number of corresponding microstates, the parts of a self-build computer before you have assembled it, or the parts of a self-build computer after you have assembled it?

Congratulations. You’ve finally arrived.

That’s the argument you should have been making all along, instead of denying (or pretending) that the only known definition of entropy is the thermo/energy/work one, and berating Sewell for knowing that it’s not.

116.

Cantor:

Seriously? If you can’t make the connection, substitute a shoebox with 100 coins lying on the bottom, all heads, for the new deck of cards. Shake the box instead of shuffling the deck. Put the box on top of the book.

That is not a parallel example. If you have a box with coins lying all heads, jiggling the box is likely to give you a selection of heads and tails, and the more you jiggle, the more equal the proportions are likely to be.

Jiggling will thus increase the Shannon entropy of the arrangement (and if the “coins” represented “particles” and “Heads” one energy state and “Tails” another, thermodynamic entropy too), because there are far more ways of arranging an equal proportion of Head and Tails than there are of arranging unequal proportions.

However, your example was of a deck of cards, shuffled. There are just as many ways of rearranging a shuffled deck as an unshuffled deck! So the Shannon entropy remains unchanged by the shuffle. On the other hand, if you were to somehow change all the suits to hearts, you would have reduced the Shannon entropy of the arrangement.

The thermodynamic entropy, however, is slightly increased when you raise the deck, because that deck can now, potentially, do work. You have added and stored energy, and with a small amount of additional energy, by giving it a nudge, you can make the table and air heat up a little when it falls back on to the table.

But neither Shannon entropy nor thermodynamic entropy have anything to do with “order” as in “the order in which the items are arranged”. They have to do with the number of possible ways in which the items can be arranged. 52 cards can be arranged 52! ways. 99 coins Heads Up and 1 Tails up can be rearranged 100 ways (without flipping them over), but 50 Heads up and 50 Tails up can be arranged 100!/((100-50)!*50!) ways.

Equally, the items of a junkyard arranged neatly in rows have the same amount of Shannon entropy as the same junkyard after being rearranged by engineers or by a tornado. They probably have the same amount of thermodynamic entropy too.
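The counting claims in this comment can be checked directly with the standard library; a minimal sketch:

```python
from math import comb, factorial

# Coins: the macrostate fixes only how many are heads, not which ones.
assert comb(100, 1) == 100        # 99 heads, 1 tail: 100 microstates
assert comb(100, 50) == factorial(100) // (factorial(50) * factorial(50))
assert comb(100, 50) > 1e29       # 50/50: ~1.0e29 microstates

# Cards: 52 distinct cards admit 52! orderings whether sorted or shuffled,
# so shuffling leaves this count (and any entropy defined from it) unchanged.
assert factorial(52) > 8e67       # 52! is ~8.07e67
```

This is exactly the contrast being drawn: the coin macrostates differ enormously in multiplicity, while the card deck's multiplicity does not depend on its current ordering.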

117.

Cantor

Congratulations. You’ve finally arrived.

That’s the argument you should have been making all along, instead of denying (or pretending) that the only known definition of entropy is the thermo/energy/work one, and berating Sewell for knowing that it’s not.

So why does Sewell even mention the 2nd Law of Thermodynamics, and energy, if that’s not what he’s talking about?

This is exactly what I’ve been saying all along! That the 2nd Law (and whether the Earth is an “open” or “closed” system) is utterly irrelevant to his argument, which is merely Dembski’s CSI argument, i.e. a probability/information argument.

Which has its problems indeed, but nothing to the problem of apparently claiming that evolution, if the cause of living things, would mean that the 2nd Law of Thermodynamics has been violated!

I am still unclear whether he thinks it’s fine if the violator is a Designer (or merely a designer), or whether he thinks that if an apparent violation is by a Designer (or designer) it isn’t a violation!

Or neither.

118.

“Which has its problems indeed, but nothing to the problem of apparently claiming that evolution, if the cause of living things, would mean that the 2nd Law of Thermodynamics has been violated!”

Laws are not violated by definition. No IDer says that the 2nd law is violated. The problem is another. It is indeed evolution (if it happened, but it didn’t happen) that would be a violation of the 2nd law. Because evolution and the 2nd law are incompatible. One goes right while the other goes left. One (the 2nd law) states that systems go towards probability; the other (evolution) absurdly pretends to go towards improbability. When will you stop trying to reconcile the incompatible?

119.
keiths says:

Cantor and Lizzie,

One of the weirder aspects of Granville’s paper, which no one has yet mentioned, is his concept of “X-entropy.”

It’s not just that Granville confuses disorder (as in messy houses or tornado rubble) with thermodynamic disorder. He actually thinks that there are thousands of different kinds of entropy and thousands of instances of the second law:

However, if we define “X-entropy” to be the entropy associated with any diffusing component X (for example, X might be heat), and, since entropy measures disorder, “X-order” to be the negative of X-entropy, a closer look at the equations for entropy change shows that they not only say that the X order cannot increase in an isolated system, but that they also say that in a non-isolated system the X order cannot increase faster than it is imported through the boundary.

Think of a mixture of gaseous nitrogen, helium, and radon. To any physicist, there would be a single entropy associated with the mixture. To Granville, there are at least four entropies and four applicable second laws! The nitrogen entropy could be high while the heat entropy is low, and the other two are who knows what. Each of the four follows its own version of the second law.

It’s as if Granville is saying “When you go to the grocery, be sure to bring enough potato-money and milk-money, because you can’t buy potatoes with milk-money or milk with potato-money.”

What a confused mess. This paper is a train wreck, and I mean disordered, not high entropy. It amazes me that people are still trying to defend it.

120.
cantor says:

Liddle@116:
That is not a parallel example.

It is if you look at it the following way (which I believe is Sewell’s argument)… altering Young&Freedman thus:

Compared to the state “macroscopically describable order” (of the deck of cards), the state “no macroscopically describable order” has a much greater number of possible microstates, much greater disorder, and hence much greater entropy (which is a quantitative measure of disorder).

121.

Laws are not violated by definition. No IDer says that the 2nd law is violated. The problem is another. It is indeed evolution (if it happened, but it didn’t happen) that would be a violation of the 2nd law. Because evolution and the 2nd law are incompatible. One goes right while the other goes left. One (the 2nd law) states that systems go towards probability; the other (evolution) absurdly pretends to go towards improbability. When will you stop trying to reconcile the incompatible?

Because a) Granville is simply wrong when he says the two are incompatible, and b) if evolution is incompatible with the 2nd Law (which it isn’t) why isn’t a Designer? Both (allegedly) cause otherwise improbable things to happen.

And talking of incompatibles, why are half Granville’s defenders (and half of Granville) saying that Granville isn’t actually talking about the 2nd Law of thermodynamics but some other kind of entropy that also has (or seems to have) a 2nd Law, and the other half of his defenders (and the other half of Granville) saying, but evolution if true would violate the 2nd Law of Thermodynamics?

It must be clear to pretty well everyone now that his argument is a mess (it seems also to be becoming clear to Granville, which is to his credit).

122.
cantor says:

keiths@119:
It’s as if Granville is saying “When you go to the grocery, be sure to bring enough potato-money and milk-money, because you can’t buy potatoes with milk-money or milk with potato-money.”

What a confused mess. This paper is a train wreck

Can you please post something more useful than childish ridicule?

What is it about the concept of X-entropy that you find so contemptible? Talk math and physics please, if you are able.

123.
Chesterton says:

“if evolution is incompatible with the 2nd Law (which it isn’t) why isn’t a Designer? Both (allegedly) cause otherwise improbable things to happen.”

Because a designer can do “work”, which is what you need to reduce entropy, according to the 2nd Law.

124.

Cantor:

It is if you look at it the following way (which I believe is Sewell’s argument)… altering Young&Freedman thus:

Compared to the state “macroscopically describable order” (of the deck of cards), the state “no macroscopically describable order” has a much greater number of possible microstates, much greater disorder, and hence much greater entropy (which is a quantitative measure of disorder).

I think you have misunderstood Young & Freedman. The equivalent macroscopic description to “all heads” or “half heads half tails” for your cards might be “all ace of spades” versus “each card a different face”. If all your coins are heads, the number of ways of arranging them is 1. If half are Tails, the number of ways is N!/((N-T)!*T!), where N is the number of coins and T is the number of Tails.

If your cards are 52 different cards, the number of ways of arranging them is 52!, regardless of how they are arranged when you find them. However, if they are all Ace of Spades, the number of ways is, like All Heads, 1.

So shuffling your deck makes no difference to the Shannon entropy or to the “thermodynamic entropy” in Young & Freedman’s metaphor. What matters for the amount of entropy is the number of possible arrangements, not the arrangement the macrostate happens to be in. The macrostate description is simply the proportions of each, not the order.

So which arrangement has the greatest number of corresponding microstates, a junkyard before a tornado, or a junkyard after a tornado?

And the answer is: they are both the same, as long as the inventory is unaltered. The arrangement doesn’t matter – the macroscopic state is simply the inventory.

Similarly, which arrangement has the greatest number of corresponding microstates, the parts of a self-build computer before you have assembled it, or the parts of a self-build computer after you have assembled it?

Again, the answer is that, according to Freedman & Young (and me!) both are the same.
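The counting in the coins-and-cards comparison above is easy to check numerically. A minimal sketch using standard binomial and factorial counting (the values N = 100 and T = 50 are illustrative choices only):

```python
from math import comb, factorial

# Coins: microstate count for the macrostate "T tails out of N coins"
N, T = 100, 50
all_heads = comb(N, 0)   # only one arrangement is all-heads
half_tails = comb(N, T)  # N!/((N-T)! * T!) arrangements

print(all_heads)         # 1
print(half_tails)        # ~1.01e29

# Cards: 52 distinct cards admit 52! orderings no matter what order you find
# them in; 52 identical cards (all Ace of Spades) admit exactly one.
print(factorial(52))     # ~8.07e67
```

The point of the sketch is the same as the argument above: what changes the microstate count is the inventory (how many of each kind), not which particular arrangement you happen to observe.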

125. 125

Chesterton:

“if evolution is incompatible with the 2nd Law (which it isn’t) why isn’t a Designer? Both (allegedly) cause otherwise improbable things to happen.”

Because a designer can do “work”, which is what you need to reduce entropy in accordance with the 2nd Law.

So what then happens to the entropy of the designer?

126. 126
cantor says:

Liddle@124:
I think you have misunderstood Young & Freedman

I think you misunderstood my post #120.

There are M ways of arranging a 52-card deck that will result in “macroscopically describable order”.

There are 52!-M ways of arranging a 52-card deck that will result in “no macroscopically describable order”.

The probability of obtaining a deck with “macroscopically describable order” by random shuffling is M/52!

The problem, as you correctly identified in your post #111, is estimating a value of M. That is the key to the argument.

If M is vastly less than 52!, then the probability of a random shuffle resulting in a deck that has “macroscopically describable order” is vanishingly small.

127. 127

What does M stand for, above?

From your post it sounds like you mean some kind of specified sequence, or set of sequences – something like CSI.

But that has nothing to do with entropy. Which is the point I keep making.

There is nothing special about a sorted deck from the point of view of entropy, whether Shannon entropy, or as a metaphor for thermodynamic entropy.

A sorted deck has no more or less Shannon entropy than a shuffled deck. Entropy, contra too many bad textbooks, has nothing to do with order or disorder. It has to do with the number of ways (microstates) in which the elements of the macrostate can be arranged. The deck of cards is a bad example because a card cannot change “state” (change from one face to another). However, coins on a table can take two states. So to make it slightly more complex, let’s take dice.

If we lay a set of 6 dice on a table, sixes up, there is only one way – one microstate – of arranging the dice without turning them over. The “macrostate” description is “all sixes”. This description also gives us perfect information about the state of each die – we know that any one we point to, blindfolded, is six-up.

If we lay 3 of them ones up, and three sixes up, there are 20 ways in which we can arrange them without turning them over. The macrostate description is now “half ones, half sixes”, and less informative – if we point blindfolded to a die, we have a probability of .5 of being correct (we know it is either a one or a six).

If we lay them down with 1 of each possible face up, the macroscopic description is: “1/6 ones, 1/6 twos, 1/6 3s, 1/6 fours, 1/6 fives, 1/6 sixes”. We now have only a 1 in 6 chance of being correct if we point blindfolded and guess.

The entropy of the last macrostate is much higher than that of the first. As a result there is much less certainty about the state any die is in, and the information contained in the macrostate description is reduced.

But the entropy is not affected by shuffling the dice around, as long as you don’t turn them over.
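The three dice macrostates above can be put into numbers. A sketch (multinomial microstate counts, plus the Shannon uncertainty in bits of a blindfolded guess at one die):

```python
from math import factorial, log2

def microstates(counts):
    """Multinomial coefficient n!/(c1!*c2!*...): the number of ways to lay
    out the dice without turning any over, given the face counts."""
    ways = factorial(sum(counts))
    for c in counts:
        ways //= factorial(c)
    return ways

def per_die_uncertainty(counts):
    """Shannon entropy, in bits, of a blindfolded guess at one die's face."""
    n = sum(counts)
    return sum(-(c / n) * log2(c / n) for c in counts)

print(microstates([6]), per_die_uncertainty([6]))          # all sixes: 1 way, 0.0 bits
print(microstates([3, 3]), per_die_uncertainty([3, 3]))    # half/half: 20 ways, 1.0 bit
print(microstates([1] * 6), per_die_uncertainty([1] * 6))  # one of each: 720 ways, ~2.58 bits
```

Note that shuffling the dice around without turning any over changes neither number: the macrostate (the face counts) is what fixes the entropy.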

This is why Granville is incorrect. He has mistaken the word “entropy” to mean something about how “ordered” or “disordered” the elements are, not how many possible microstates the system has. In terms of Shannon entropy, a town reduced to rubble, as long as nothing is added or missing, has the same entropy as the town before the tornado. It possibly has less literal thermodynamic entropy, as some things may be closer to the ground, but possibly more (sofas up trees). But the fact that it looks a mess has nothing to do with entropy.

128. 128

Essentially Granville has confused “order” as in “orderly sequence” with “order” as in “number of possible sequences”.

The sense in which low entropy means “order” is simply that a low entropy system has very few possible ways of rearranging itself (e.g. 2 fenders and 80 spark plugs).

It does not mean that it is arranged in some specific interesting way (fenders on the shelf, spark plugs in the drawers).

129. 129
Timaeus says:

Elizabeth (104):

A minor point about conversational stance:

“And cs3 is making exactly the same error!”

Note the difference in “flavor” between this statement and:

“And cs3 appears to me to be making the same error that I think you are making.”

The idea conveyed in both cases is roughly the same, but there is a difference in subtext. The subtext of the second is what you’ve said all along when questioned about your level of expertise: “I fully admit that I don’t have formal qualifications in X or Y. When I make statements in those fields, I am expressing my opinion, based on my own general knowledge of science and my own reasoning. I welcome differences of opinion as I may be in error. I am not asking for anyone to take me as an authority, and I don’t claim my judgment is the final word.”

The subtext of the first formulation is: “I have a full understanding of the relationship of the ideas of entropy, order, etc., as they are used by physicists, and I *know* (not just think) that Granville’s definitions and/or his application of them is wrong. And I have a full understanding of what cs3 wrote, and I am *certain* that none of the discussions he quotes from full-time physicists in any way conflicts with my own understanding of thermodynamics, and I am *certain* that his account does not in any way improve upon Granville’s, and I am *certain* that they are both wrong. My understanding of thermodynamics, which up until I started reading Sewell’s articles was only that of a non-physicist with a decent general knowledge of basic physics, is now, due to several quick lookups over the past 72 hours on the internet and in one or two textbooks I have lying around, that advanced and secure. I can now declare, as a qualified referee, who is right and wrong, and there is no danger at all that I, in being both referee and a partisan on one side, may be just slightly off in my verdict.”

What I’m trying to get you to see, Elizabeth, is how you sound to many people here; not just sometimes, not even just often, but pretty nearly *always*, on *any* scientific topic on which you make a judgment, whether you have a great deal of prior background or only started boning up on the subject for the first time in response to an ID article you don’t approve of. You never fail to be polite, which is great, compared to most ID critics, and we can all thank you for that; but you project an unwavering confidence in all your judgments, in your field or out of it, that perhaps unbeknownst to you generates resentment.

Indeed, this same quality of uncanny confidence in one’s judgment about a wide range of matters is seen in Matzke, who doesn’t hesitate to correct botanists on botany, render decisive verdicts on new ID books (no matter what the subject) within 48 hours, etc., and who has never (at least, on threads here where I have been present) granted a scientific point of any importance to any ID person who disagrees with him or modified any of his conclusions in the slightest (even on the characterization of certain ID proponents as creationists where he has been shown to be flat-out wrong). Always it is he who knows the science, he who has the best argument, and the others who either don’t know the science or don’t reason well.

Of course, your manners are much preferable to Matzke’s, which are thuggish (though not as obnoxious as those of Shallit, Myers, and Moran), but still, when the same person in every column on this website on which she posts is *always* the teacher, *always* the corrector, *always* the “Renaissance man” who can jump into any field and speak with a confidence which shows no visible difference to the confidence of the specialist, and is *never* the uncertain one, *never* the one who asks for intellectual help, etc., it gets wearying, especially when we all know that there are Nobel Prize winners and NAS members who are extremely deferential and hesitant in speaking out of their fields.

I do not know if you can sense how you might appear to others; you may believe that, since your own motives are untainted by ego, everyone else will see you that way. And I am not saying that your motives *are* egocentric; I’m merely saying something about what I believe to be a common personal reaction to your way of conversing. Take this for what it is worth. If it helps, make the necessary adjustments; if it doesn’t, well, I won’t repeat it.

130. 130

You are absolutely right, Timaeus, I should have phrased it as you suggested.

I apologise to cs3.

131. 131

Moreover, I will correct an(other) error of my own: I said that Shannon entropy was negatively correlated with thermodynamic entropy. That is not correct – or rather, it’s somewhat meaningless, but to the extent that it has meaning, it was incorrect. Shannon entropy has the same equation as thermodynamic entropy (give or take a constant), so when we use things like dice or coins as a metaphor for energy states, the answer in Shannon entropy will be a perfectly good metaphor for thermodynamic entropy.

What I should have said is that the amount of information contained in the macrostate description of a high entropy system tells you far less about the state of any one element than the same description of a low entropy system. So high thermodynamic entropy means low information in the description of the macrostate. It doesn’t mean low Shannon entropy.
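The “same equation, give or take a constant” point above can be made explicit. For a macrostate whose $W$ microstates are equally probable (a sketch; $k_B$ is Boltzmann’s constant):

```latex
S = k_B \ln W, \qquad H = \log_2 W
\quad\Longrightarrow\quad S = (k_B \ln 2)\, H .
```

More generally $H = -\sum_i p_i \log_2 p_i$, which reduces to $\log_2 W$ when every $p_i = 1/W$.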

132. 132
keiths says:

cantor:

What is it about the concept of X-entropy that you find so contemptuous? Talk math and physics please, if you are able.

Think of a mixture of gaseous nitrogen, helium, and radon. To any physicist, there would be a single entropy associated with the mixture. To Granville, there are at least four entropies and four applicable second laws! The nitrogen entropy could be high while the heat entropy is low, and the other two are who knows what. Each of the four follows its own version of the second law.

If you don’t understand why that is so ridiculous, then ask me and I will spell it out for you.

133. 133
cantor says:

keiths@132:
If you don’t understand why that is so ridiculous, then ask me and I will spell it out for you.

If you want to be taken seriously, lose the arrogance and attitude, and start making substantive posts instead of mockery.

134. 134
keiths says:

cantor,

I’ve made a substantive case for why Granville’s X-entropy concept is ridiculous. If you disagree with what I’ve written, then tell us exactly what you disagree with and why.

If you don’t understand why X-entropy is ridiculous, then ask and I will explain it further.

I understand that it upsets you to see Granville’s paper collapse under scrutiny, but the point of this thread is to examine the paper, warts and all.

The paper is on the table. It is open to criticism, and X-entropy is a concept well worth criticizing.

135. 135
cantor says:

Liddle@127:
What does M stand for, above?

M is a number. The precise value of M is unknowable, but an intuitive argument is made that it is vastly less than factorial(52).

Define two macrostates for the deck:

1) the order of the deck is “macroscopically describable”

2) the order of the deck is not “macroscopically describable”

The microstates are the factorial(52) different possible orderings of the deck. The first macrostate above has M associated microstates. The second has factorial(52) minus M microstates. The probability of getting macrostate 1 for any random shuffle is M/factorial(52).

Shuffling the deck is now like flipping an unfair coin.

How you go about discerning “macroscopic describability” is a point of considerable contention (understatement).
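Cantor’s unfair-coin picture is easy to make concrete. A sketch in which M is purely hypothetical (its real value is, as stated above, unknowable); even a generous guess of 10^40 “describable” orderings leaves a vanishing probability:

```python
from math import factorial

total = factorial(52)  # all possible orderings of the deck, ~8.07e67
M = 10**40             # hypothetical count of "macroscopically describable" orderings
p = M / total          # probability a random shuffle lands in macrostate 1
print(p)               # ~1.2e-28
```

Whether any such M can be defined non-arbitrarily is, as noted above, the contested point; the arithmetic itself is trivial.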

136. 136
cantor says:

keiths@134:
I understand that it upsets you to see Granville’s paper collapse under scrutiny

137. 137
cantor says:

keiths@134:
I’ve made a substantive case for why Granville’s X-entropy concept is ridiculous.

Not in this thread you haven’t. If you’ve posted a calm, well-reasoned, thorough and articulate explication of your views on this matter elsewhere then link to it here.

138. 138
keiths says:

cantor,

Can you defend X-entropy, or are you just venting?

139. 139
cantor says:

keiths@138:
Can you defend X-entropy, or are you just venting?

That sort of dodging may work where you normally hang out, but it won’t fly here.

Can you give an informed, respectful, well-reasoned, articulate critique of X-entropy, or are you just venting?

140. 140
keiths says:

How about starting here, cantor. I wrote:

Think of a mixture of gaseous nitrogen, helium, and radon. To any physicist, there would be a single entropy associated with the mixture. To Granville, there are at least four entropies and four applicable second laws! The nitrogen entropy could be high while the heat entropy is low, and the other two are who knows what. Each of the four follows its own version of the second law.

Do you disagree with my characterization of X-entropy?

141. 141

Elizabeth B Liddle:

“So what then happens to the entropy of the designer?”

To ask about “the entropy of the designer” is like asking “when Eternity goes to the cinema, does it pay for the ticket?”. Please, if you are an intelligent Lady you cannot say such things… Entropy affects the physical only, not the metaphysical.

The anti-evolution argument from the 2nd law is simple. It is you evolutionists who obfuscate it, for you know well you are defending the indefensible. No need to speak of entropy. In nuce, the 2nd law says that *all* things go towards probability. Evolution says that *countless* things went towards improbability. How can they match?

Have you ever seen a system that self-adjusts? No.
Have you ever seen a system that self-constructs? No, a fortiori.
Have you ever seen a system that breaks down? Yes, countless times, every day.

All above *facts* are consequence of the 2nd law (and all contradict evolution). About the 2nd law, you evolutionists “open the systems” but unfortunately close your eyes…

142. 142
kairosfocus says:

EL:

With all reasonable respect, your response at 131 above shows that you are unfamiliar with the line of work that has come from Jaynes (and also Brillouin and others), as is summarised in, say, Robertson’s Statistical Thermophysics.

I actually excerpted some thoughts on that line of thinking in thermal physics, above at 96 – 97 from my always linked note, but as usual, you ignored it and went on to — pardon my being more blunt than Timaeus — pontificate on a superficial base.

I will simply note my agreement with CS3 on the point that entropy crops up in any number of cases where energy and mass are subject to various arrangements at micro-level consistent with a macro-level (lab scale) description. This includes things like free expansion, diffusion, osmosis, viscosity, and more. Thermodynamics started with temperature but it has gone far beyond that once the statistical and informational perspectives came to bear. And, as Dr Sewell is pointing out, the diffusion concept and relationships linked to it lie at the heart of a lot of what is going on in thermodynamics — random walks leading to a strong tendency of spreading out from zones of high to those of low concentration — so his X-Entropy remarks have a point, though obviously they are not conventional terminology, cf. my box and marbles thought exercise and related remarks at 96 – 97 above.

That spreading out effect is closely tied though not simply equivalent to increased disorder, and a more probable cluster of states.

These points are also closely connected to the 500H coin case, and the natural tendency on shaking, to move from it to the more probable clusters close to the 250 H/T mean and with the HT pattern in no particular simply describable order [like an ASCII code message or the like], which yields an increase in entropy.

Similarly, precisely because it is so far from the overwhelming bulk of possibilities, it is empirically, practically unobservable on the gamut of our solar system for a box of 500 coins to spontaneously move to the 500 H microstate — a point which you and others of your ilk recently had such a hard time grasping, indeed outright consistently refused to acknowledge.

Which, is again close to the statistical form of the 2nd law.

In short, all of this is of a piece, and points to a systematic gap in your understanding or acceptance, one that is evidently driven by the reigning orthodoxy of our day, which requires that in some warm pond or the equivalent, complex, highly specific functional organisation somehow spontaneously gave rise to a metabolising, code-based, information-processing self-replicating automaton, the living cell, which then similarly managed to elaborate itself into the world of life by the Darwinian magic ratchet. To suggest plausibility for that, it has been suggested that inflow and outflow of mass and/or energy are adequate to account for such.

In fact, inflow of energy into a body is going to naturally increase accessible states and will tend to disorder the initial state, which in a system liable to “forget” its path to current state will then be lost and irrecoverable for practical purposes. As both the coins case and the marbles in a box case will suffice to show: there is a strong tendency to move to the clusters of states that are not simply describable and fit no particular specific configuration based functional pattern.

Anyway, an information-oriented summary of what entropy fundamentally means (one that I would now use in some form if I were to teach students in either comms theory or thermodynamics), is that it is a metric of the average missing information to specify the micro-level distribution of mass and energy in a body or system, as described at gross or macro [e.g. lab] level on typical macro- state variables. Thus, that absence of knowledge leads us to have to treat the state as to that extent random, and so we end up constrained by things like the Carnot theorem on maximum harvestable work, or the like.

KF

143. 143
kairosfocus says:

KS, re CS3: In one word, diffusion, cf. just above. Do you want to go through a box- and- marbles exercise with various colours, initially ordered in clusters then what happens after a time? Or, injection of a new colour into the mix and what happens after a time? KF

144. 144

KF: nobody is arguing that things do not spontaneously diffuse, or that “rubble” after a tornado is not infinitely more probable than a rearranged street of houses.

As I see it (and I am very willing to be shown to be wrong!), this is clearly true, simply because there are far more arrangements describable as “rubble” than there are arrangements describable as “street of houses”, and tornados blow every which way, which is why we call them tornados.

And as you suggest, a teabag added to a cup of water will quickly result in a uniformly pale brown fluid, because there are far more possible arrangements of more or less uniformly distributed tea molecules in the water than there are of arrangements in which they are bunched up in one place. If there is nothing pushing them in a specific direction, they will spread out, because left to their own devices, movement in any one direction is as probable as any other, and thus “back into the teabag” is much less probable than “not back into the teabag”. The entropy of the teabag will therefore increase, because there is now a mix of tea and water molecules in the bag, instead of just tea molecules, so there are now many more ways of arranging the contents. Similarly the entropy of fluid outside the teabag has also increased, as there are now tea molecules as well as water molecules in the rest of the cup, instead of just water, and thus, again, more possible arrangements.
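The “spreading out because spread-out arrangements vastly outnumber bunched ones” point can be illustrated with a toy simulation (a sketch only, not a physical model; the cell count, particle count and step count are arbitrary choices): particles start bunched at one end of a ten-cell “cup” and unbiased random steps spread them out.

```python
import random
from collections import Counter

random.seed(1)

CELLS, PARTICLES, STEPS = 10, 1000, 200_000

# All "tea" particles start in cell 0.
positions = [0] * PARTICLES

for _ in range(STEPS):
    i = random.randrange(PARTICLES)                      # pick a particle at random
    positions[i] += random.choice([-1, 1])               # unbiased step left or right
    positions[i] = min(CELLS - 1, max(0, positions[i]))  # reflect at the cup walls

# After many unbiased steps the occupancy is roughly uniform across cells:
print(sorted(Counter(positions).items()))
```

No step is biased “away from the teabag”; the drift toward uniformity is purely a matter of counting, which is the point being made above.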

But that doesn’t mean it’s not possible to get the tea back into the teabag without violating the 2nd Law. If it were, we wouldn’t have instant tea (disgusting stuff though it is). It’s just that you have to do some work to get it there, and if you do, or something does, the entropy of that something will increase as a result of the work done. And that something doesn’t need to be a human instant-tea powder designer.

Instead of tea in a cup, consider a flash flood in a salt pan. Immediately after the flash flood, there is a layer of nearly pure salt at the bottom of what is now a lake, with nearly pure water above. Both salt layer and water layer have low entropy, because there are very few ways of rearranging the constituents of either to produce a different configuration (just as there are very few ways in which you can rearrange 99 Heads and 1 Tail without flipping the coins over). However, over time, the salt will dissolve in the water, and the salt molecules will distribute themselves uniformly in what is now a saline lake that has high entropy (lots of different ways of rearranging the salt and water molecules).

Can the salt get back where it started, in a nice low entropy layer at the bottom? Sure. The sun can, and will, as it has done before, evaporate off the water, leaving a low entropy salt layer again. And the low entropy water layer can also be restored: as the earlier evaporated water condenses in the upper atmosphere, eventually it will fall as rain, resulting in another flash flood, at the beginning of which we will have, yet again, two local low entropy macrostates: salt at the bottom of the lake; pure water above. No violation of the 2nd Law has occurred, because in evaporating off the water, entropy has increased in the atmosphere above (now the dry air has water molecules in it). Nor has any designer been involved. Yet the salt is “back in the bag”, as it were.

The 2nd Law does not forbid local decreases of entropy – segregation of uniformly saline fluid into layers of salt and water; snowdrifts; wind-piled leaves; even neat piles of sorted belongings in tidied-up houses. What it says is that any local decrease will be accompanied by an increase in the entropy of the surroundings that is at least as great, and usually greater, which is why I need my cup of tea and a biscuit after the rare occasions when I tidy the house.

Thus, whether evolution was the cause of biological complexity, or a designer, we have no reason to believe that a violation or suspension of the 2nd Law was involved. We do not have to make any kind of special pleading for the designer hypothesis, as niwrad does above, saying that, a designer is an OK explanation because “entropy affects the physical only, not the metaphysical”. This is because the inference that any physical cause would necessitate a violation of the 2nd Law is false. Just as a gust of wind can “undiffuse” a park full of scattered leaves into a pile in the corner, so can other material processes “undiffuse” atoms to form local macrostates with decreased entropy. If it were not so, saltpans wouldn’t exist.

(I’m aware that I have ignored the cooling of the water in the teacup, as well as the cooling by evaporation in the salt pan, and by the dissolving of the salt – but my point is that even if we define entropy very simply, as the number of possible rearrangements of the elements of a local system, it is clear that there is no reason to think that “sorting” processes that result in local decreases of entropy violate the 2nd Law, nor that they cannot occur in the absence of living things. And of course, it is the contention of “evolutionists” that evolution is a sorting process).

145. 145
Chesterton says:

“So what then happens to the entropy of the designer?”

The designer could be a machine that uses the diffusing energy of the universe to do work, as our bodies do.

146. 146

Have you ever seen a system that self-adjusts? No.

Yes. I assume you mean apart from living things, though. Answer is still yes. Non-living homeostatic systems exist. A good example is the albedo effect of a snow field. Snow reflects sunlight, keeping the air cool, and thus maintaining the depth of snow. That’s why loss of snowfields is so worrying – we may be losing a major mechanism for maintaining the planet’s temperature.

The Oklo natural nuclear reactor is (or was) another. Old Faithful is another.

Have you ever seen a system that self-constructs? No, a fortiori.

Sure. Again, living things self construct of course, but what about crystals? Or sand dunes?

Have you ever seen a system that breaks down? Yes, countless times, every day.

Yes indeed. But nobody is arguing that the 2nd Law is false. All we are saying is that Granville’s assertion that if evolution were true, the 2nd Law would have been violated, is false.

147. 147

chesterton:

The designer could be a machine that uses the diffusing energy of the universe to do work, as our bodies.

Sure. Or chemical reactions that use the same energy. You may disagree that this is possible, but it’s not the 2nd Law that prevents it.

148. 148
fifthmonarchyman says:

I’d like to thank everyone for this thread. It has been very informative to me. As a lurker I did not understand the argument from the second law and it sounded as if the critics had a slam dunk objection against it.

Now I get it and although I don’t think that it will convince folks like Keiths or Elizabeth It seems to me to be a sound argument that is not effectively countered by talking about “compensation” from the sun.

thanks again

149. 149

Elizabeth B Liddle

All your examples of “self-adjusting/self-constructing systems” are trivial natural phenomena where some sort of feedback is at work. Nothing to do with real CSI machinery able to self-adjust or self-construct itself or some of its CSI parts.

“Living things self construct of course.”

Self-construction or self-organization are dreams of evolutionists. Not the least CSI system did or does that.

“All we are saying is that Granville’s assertion that if evolution were true, the 2nd Law would have been violated, is false.”

But of course Prof. Sewell is right, if evolution were true the 2nd law would have been violated. Analogy: a general orders that all soldiers in the universe go right. All the armies on the Earth go left. Don’t you say the command has been violated?

However, I like what you said in reply to the question: “Have you ever seen a system that breaks down? Yes, countless times, every day.”

“Yes indeed. But nobody is arguing that the 2nd Law is false.”

Ah, this sounds like progress by you. Finally you recognize the 2nd Law has to do with things breaking down. Sometimes this is the first step to abandoning evolutionism. A friend of mine did so. Something tells me that, if you continue staying with us at UD, some day you will pass to the ID side. Welcome in advance. 🙂

150. 150
Granville Sewell says:

KS

Think of a mixture of gaseous nitrogen, helium, and radon. To any physicist, there would be a single entropy associated with the mixture. To Granville, there are at least four entropies and four applicable second laws! The nitrogen entropy could be high while the heat entropy is low, and the other two are who knows what. Each of the four follows its own version of the second law.

Footnote 1 in my Bio-Complexity paper contains a quote from “Two Essays on Entropy,” by R. Carnap, Univ. California Press 1977:

There are many thermodynamic entropies, corresponding to different degrees of experimental discrimination and different choices of parameters. For example, there will be an increase in entropy by mixing samples of O-16 and O-18 only if isotopes are experimentally distinguished.

And from the last paragraph of section 3 of my paper:

Bob Lloyd’s primary criticism [Mathematical Intelligencer piece written to rebut my “unpublished” AML paper!] of my approach was that my “X-entropies” (e.g.”chromium entropy”) are not always independent of each other. He showed that in certain experiments in liquids, thermal entropy changes can cause changes in the other X-entropies. Therefore, he concluded, “the separation of total entropy into different entropies…is invalid.” He wrote that the idea that my X-entropies are always independent of each other was “central to all of the versions of his argument.” Actually, I never claimed that: in scenarios A and B, using the standard models for diffusion and heat conduction, and assuming nothing else is going on, the thermal and chromium entropies are independent, and then statement 1b nicely illustrates the general statement 2b (though I’m not sure a tautology needs illustrating). But even in solids, the different X-entropies can affect each other under more general assumptions. Simple definitions of entropy are only useful in simple contexts…

In an earlier comment, KS, you said “even Granville seems to realize now, to his credit, that his paper is bad” or something like this, I can’t find the comment now, maybe you removed this. But the fact that you can outlast me doesn’t mean I am conceding that I am wrong. I just don’t have the time to answer criticism after criticism, for days on end, that are already addressed in my papers.

151. 151
Chesterton says:

“Sure. Or chemical reactions that use the same energy. You may disagree that this is possible, but it’s not the 2nd Law that prevents it.”

Chemical reactions tend toward equilibrium; life tends toward a non-equilibrium system. The albedo effect, crystals, dunes, and nuclear reactions all tend toward equilibrium. To avoid the tendency toward equilibrium you need not just energy but “work”. And to do work you need a machine. Life forms are machines that avoid equilibrium.

152. 152
keiths says:

Granville,

In an earlier comment, KS, you said “even Granville seems to realize now, to his credit, that his paper is bad” or something like this, I can’t find the comment now, maybe you removed this.

That’s because Lizzie said it, not me:

It must be clear to pretty well everyone now that his argument is a mess (it seems also to becoming clear to Granville, which is to his credit).

Granville:

But the fact that you can outlast me doesn’t mean I am conceding that I am wrong. I just don’t have to time to answer criticism after criticism, for days on end, that are already addressed in my papers.

However, “I’m right but I’m too busy to explain why” isn’t believable.

153. 153

Elizabeth B Liddle
All your examples of “self-adjusting/self-constructing systems” are trivial natural phenomena where some sort of feedback is at work. Nothing to do with real CSI machinery able to self-adjust or self-construct itself or some of its CSI parts.

Yes, of course they are natural phenomena where some sort of feedback is at work. Feedback is the key to homeostasis. And whether they are “trivial” or not is scarcely the point. Biology is chock full of feedback homeostatic mechanisms, but I assumed you meant some non-biological example. As for CSI, that isn’t what you asked for, and I am not going to give you an example because I think CSI is a circular concept.

“Living things self construct of course.”
Self-construction or self-organization are dreams of evolutionists. Not the least CSI system did or does that.

Why did you snip my sentence there? I said: “Again, living things self construct of course, but what about crystals? Or sand dunes?” Do they not “self construct”? If not, who constructs them?

All we are saying is that Granville’s assertion that if evolution were true, the 2nd Law would have been violated, is false.”
But of course Prof. Sewell is right, if evolution were true the 2nd law would have been violated. Analogy: a general orders that all soldiers in the universe go right. All the armies on the Earth go left. Don’t you say the command has been violated?

And my argument is that Sewell’s assertion is incorrect. The 2nd Law does not forbid local decreases in entropy, gained at the cost of increased entropy in the surroundings, which is all biological organisms represent, and no more than evolution is posited to do. The world abounds with both living and non-living instances of local decreases of entropy. Why should we think that evolutionary processes are anything other than another example?

However, I like what you said at the question: “Have you ever seen a system that breaks down? Yes, countless times, every day.”
“Yes indeed. But nobody is arguing that the 2nd Law is false.”
Ah, that sounds like progress. Finally you recognize that the 2nd Law has to do with things breaking down. Sometimes this is the first step toward abandoning evolutionism. A friend of mine took it. Something tells me that, if you continue staying with us at UD, some day you will pass to the ID side. Welcome in advance.

Of course the 2nd Law is true, and of course it says that entropy tends to increase overall. But that doesn’t mean that nothing can ever decrease in entropy, for example by repairing itself. We know it can, and when it does, the 2nd Law is not violated.

154.
keiths says:

Granville,

Your defenses of “X-entropy” don’t succeed.

In Carnap’s example, he is not claiming that there is an “O-16 entropy” and an “O-18 entropy”, as you are. He is saying that when you measure the thermodynamic entropy, you get a different value depending on whether your experiment distinguishes the isotopes.

Regarding your second defense, the very fact that X-entropies are not independent (as you concede) is why your attempted refutation of the compensation argument fails.

In your paper, you argue that

Importing thermal order into a system may make the temperature distribution less random, and importing carbon order may make the carbon distribution less random, but neither makes the formation of computers more probable.

Here you repeat the error of thinking that an assembled computer necessarily has less entropy than its unassembled parts, but that’s not my point, so let’s set it aside.

If the X-entropies are not independent, as you admit, then you don’t have to import “carbon order” to reduce the “carbon entropy”.

This should be obvious. Suppose I have a gaseous mixture of oxygen and carbon dioxide that I wish to separate. I set up a chiller and cool the mixture until the carbon dioxide solidifies into dry ice. The “oxygen entropy” has decreased, as has the “carbon dioxide entropy”.

Now suppose I run this apparatus using solar cells. The incoming sunlight doesn’t contain “oxygen order” or “carbon dioxide order”, yet I have managed to decrease both the “oxygen entropy” and the “carbon dioxide entropy”.

That is, the compensation argument works just fine even in terms of your bogus “X-entropy” concept.
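keiths’ chiller example can be put in rough numbers. The sketch below is my own illustration (not keiths’ or Sewell’s figures), using the ideal-gas entropy of mixing: separating one mole of an equimolar O2/CO2 mixture requires undoing at least R ln 2 of mixing entropy, which the refrigerator must dump, along with its own losses, into the surroundings.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def mixing_entropy(n_total, fractions):
    """Ideal entropy of mixing, Delta_S = -n R sum(x ln x), in J/K."""
    return -n_total * R * sum(x * math.log(x) for x in fractions if x > 0)

# Separating 1 mol of an equimolar O2/CO2 mixture undoes the mixing entropy,
# so the gas itself undergoes an entropy DECREASE of this magnitude:
dS_separation = -mixing_entropy(1.0, [0.5, 0.5])
print(dS_separation)  # about -5.76 J/K (i.e. -R ln 2)
```

The point of the sketch is only that this local decrease is modest and is paid for elsewhere, exactly as the compensation argument says.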

155.
Granville Sewell says:

KS,
OK, my mistake, I thought the comment about me finally realizing I was wrong had been made by you. No wonder I couldn’t find it this morning.

However, “I’m right but I’m too busy to explain why” isn’t believable.

OK, you are right. If I thought there was one chance in a million of changing your mind I could probably find the time to continue responding for a while, but I don’t want to spend time on an impossible mission. Anyone who can believe that 4 fundamental, unintelligent forces of physics alone can rearrange atoms into Apple iPods will not be impressed by any other arguments I make.

156.
CS3 says:

Sewell’s argument basically boils down to two statements:

1) Natural forces do not do things that are macroscopically describable that are extremely improbable from the microscopic point of view.
2) Statement 1 holds whether a system is isolated or open; when it is open, you just have to also consider what is entering or leaving the system when deciding what is or is not extremely improbable.

Statement 1 derives from two sources: the principle that particles obey the four fundamental forces, and the idea that, of all the microstates equally likely given the constraints of the four fundamental forces, macrostates with more microstates are more probable. For example, when only diffusion is operative, all positions within the volume are equally likely, so a uniform distribution is the most probable macrostate. A macrostate with few microstates will be achieved only if the four fundamental forces make that microstate not improbable – for example, a magnet moves magnetic particles initially uniformly distributed in a volume all to one side of the volume.
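The claim that macrostates with more microstates are more probable can be checked by brute counting. A minimal sketch (my own illustration, with an assumed toy system of 100 particles free to sit in either half of a box):

```python
from math import comb

N = 100  # particles, each independently in the left or right half of a box

# Multiplicity of the macrostate "n particles in the left half":
# each of the C(N, n) placements is one equally likely microstate.
multiplicities = {n: comb(N, n) for n in (0, 10, 25, 50)}
total = 2 ** N  # total number of microstates

for n, W in multiplicities.items():
    print(f"{n} on the left: W = {W}, probability = {W / total:.3e}")

# The uniform macrostate (50/50) dwarfs the fully ordered one (0/100):
print(comb(N, 50) / comb(N, 0))  # about 1.01e29
```

With only 100 particles the uniform macrostate is already some 10^29 times more probable than the ordered one, which is why diffusion toward uniformity is the overwhelmingly expected direction.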

Statement 2 derives from logic and common sense, although he also proves it analytically for the simple case of diffusion.
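The diffusion case can also be simulated directly. The sketch below is my own toy model (not Sewell’s analytical treatment): explicit finite-difference diffusion on a closed 1-D grid, checking that the Shannon entropy of the normalized concentration profile never decreases.

```python
import math

def diffuse(c, D=0.1, steps=200):
    """Explicit finite-difference diffusion on a closed (no-flux) 1-D grid,
    returning the Shannon entropy of the normalized profile at each step."""
    entropies = []
    for _ in range(steps):
        total = sum(c)
        p = [x / total for x in c]
        entropies.append(-sum(pi * math.log(pi) for pi in p if pi > 0))
        # no-flux boundaries: mirror the end cells before applying the stencil
        padded = [c[0]] + c + [c[-1]]
        c = [padded[i + 1] + D * (padded[i] - 2 * padded[i + 1] + padded[i + 2])
             for i in range(len(c))]
    return entropies

# Start with all the "heat" concentrated in one cell of a 20-cell bar:
S = diffuse([1.0] + [0.0] * 19)
assert all(b >= a - 1e-12 for a, b in zip(S, S[1:]))  # entropy never decreases
```

The update rule is a doubly stochastic averaging step, which is why the entropy of the profile is guaranteed not to decrease in this isolated toy system.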

The energy formulations of the Second Law are related to the concept of microstates and macrostates. From University Physics, continuing the quotation from my earlier post:

The relationship between entropy and the number of microscopic states gives us new insight into the entropy statement of the second law of thermodynamics, that the entropy of a closed system can never decrease. From Eq. 18-22 (S = k ln W) this means that a closed system can never spontaneously undergo a process that decreases the number of possible microscopic states.
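The quoted relation S = k ln W can be made concrete with a two-half-volume toy system (my own illustration; the particle count is an arbitrary assumption):

```python
import math
from math import comb

k = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(W):
    """S = k ln W, the relation quoted above from University Physics."""
    return k * math.log(W)

# 100 particles split between two half-volumes: microstate counts per macrostate
W_uniform = comb(100, 50)  # 50/50 split
W_ordered = comb(100, 10)  # 90/10 split

# Going from the uniform to the ordered macrostate would mean fewer microstates,
# i.e. a spontaneous entropy decrease, which the quoted statement forbids
# for a closed system.
print(boltzmann_entropy(W_uniform) > boltzmann_entropy(W_ordered))  # True
```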

Classical and Modern Physics gives three formulations:

1. In an isolated system, thermal entropy cannot decrease.

2. In an isolated system, the direction of spontaneous change is from order to disorder.

3. In an isolated system, the direction of spontaneous change is from an arrangement of lesser probability to an arrangement of greater probability.

Thus, I see no reason that it is not fair to say that Sewell’s argument is based on at least “the underlying principle behind the Second Law.” In any event, Asimov, Styer, and Bunn certainly aren’t just talking about energy, and Sewell is responding to them on their terms.

Just like the question of whether ID is “science” ultimately just depends on your definition of “science” and is irrelevant to its truthfulness, debating whether Sewell’s argument is based on what you consider the “Second Law” is an irrelevant distraction to its truthfulness. If you want to define the “Second Law” to exclude anything not explicitly about energy, and dismiss the countless textbooks and journal papers that use a broader definition as “confused”, fine. If you prefer, assume he is just using logic and probability, not the “Second Law.” Either way, if the two statements I listed at the beginning of this post are true, it refutes the compensation argument.
Does anyone seriously dispute that improbable events cannot be made more probable by the occurrence of something causally unrelated to those events which, if reversed, would be even more improbable?

157.

Elizabeth B Liddle

“I think CSI is a circular concept.”

CSI is organization; an engine is a fully organized system. Is an engine – i.e., a physical instantiation of organization – a circular concept?

“Again, living things self construct of course, but what about crystals? Or sand dunes? Do they not “self construct”? If not, who constructs them?”

To speak of “construction” for crystals and dunes is exaggerated. Anyway, they do not self-construct at all. Their behaviour is contained from the beginning in the properties of matter/energy and the natural laws, obviously all designed by the Designer of the cosmos.

“The 2nd Law does not forbid local decreases in entropy, gained at the cost of increased entropy in the surroundings, which is all biological organisms represent, and no more than evolution is posited to do.”

No, a decrease in entropy is not “all biological organisms represent”. Organisms represent organization. A simple decrease in entropy is not organization. This is a key point. Evolutionists jocundly use “entropy” as a “free lunch”: entropy increases there, so entropy decreases here, and organisms arise here at zero cost, while the 2nd law stays safe. Too good to be true. Since entropy is related to disorder, I cause a big mess (easy) there to get organization (difficult) here. Do you see the nonsense?

158.
kairosfocus says:

EL: Kindly stop erecting a strawman distortion. What GS has been saying is closely related to how diffusion works. Indeed, he is pointing out that concentrations of heat will do an equivalent of diffusing, and so he is in effect using this as a broader mathematical construct. KF

159.

Granville:

OK, you are right. If I thought there was one chance in a million of changing your mind I could probably find the time to continue responding for a while, but I don’t want to spend time on an impossible mission. Anyone who can believe that 4 fundamental, unintelligent forces of physics alone can rearrange atoms into Apple iPods will not be impressed by any other arguments I make.

But that is not the argument we are currently dealing with. You may be right that the four fundamental forces cannot make an iPod. But that is not a valid inference from the 2nd Law of Thermodynamics, or even from probability; it is an argument from incredulity. We know perfectly well that the entropy of local systems can decrease. Therefore evidence for a local decrease of entropy is neither evidence that the 2nd Law has been violated, nor evidence that an entropy-immune Designer must have been involved.

You have said that if evolution were true, then there must have been a violation of the 2nd Law by evolutionary processes.

That is a mistake. I’m still not sure how you managed to make it, and I’m not even clear whether you agree or disagree that local entropy decreases are possible. Do you?

If you do, why is there a problem?

If you don’t, how do you explain the abundant examples in both the living and non-living world of local decreases in entropy?

You seem to have forgotten that what you are calling “improbable” arrangements are only “improbable” under the assumption that all that can happen in the world is movement of matter in equiprobable directions.

This is clearly false. If it were true, tornadoes would be wildly improbable! They represent massive reductions in entropy!

And tornadoes are not only possible, they reliably appear every year in the US.

How do you explain them, if your inference is correct?

160.

KF

Kindly stop erecting a strawman distortion. What GS has been saying is closely related to how diffusion works. Indeed, he is pointing out that concentrations of heat will do an equivalent of diffusing, and so he is in effect using this as a broader mathematical construct.

I know that what GS is saying is related to diffusion. I know he is referring to some “broader mathematical construct”. I’m saying it doesn’t add up.

Tell me: how do you account for tornadoes if things always diffuse towards greater uniformity?

161.

Cs3:

Sewell’s argument basically boils down to two statements:
1) Natural forces do not do things that are macroscopically describable that are extremely improbable from the microscopic point of view.
2) Statement 1 holds whether a system is isolated or open; when it is open, you just have to also consider what is entering or leaving the system when deciding what is or is not extremely improbable.

Statement 1 does not appear to me to be a statement of the 2nd Law. It seems to be some kind of shoehorning of Dembskian CSI into the 2nd Law.

Where are you getting the “macroscopically describable” part from? The point surely is that if a system’s macroscopic description applies to many microstates it has more entropy than if it applies to few.

Just like the question of whether ID is “science” ultimately just depends on your definition of “science” and is irrelevant to its truthfulness, debating whether Sewell’s argument is based on what you consider the “Second Law” is an irrelevant distraction to its truthfulness.

Well, not so much. Sewell claims that if evolution is true, the 2nd Law of thermodynamics has been violated. If Sewell means something other than the 2nd Law of thermodynamics, then he needs to rewrite his argument.

It’s not as though the 2nd Law is ambiguous.

And when you take that part out, he’s left with Dembski’s CSI argument, minus the part about modelling the null to take into account “Darwinian and other material mechanisms”.

162.
Thomas2 says:

ks (at 153) –

I am a novice here, but I have an interest in whether mindless evolution does in fact violate the 2nd Law, so I’d be grateful for your view on whether the following is on the right track:

For your apparatus to work, the solar cell will have to power a heat pump, and the operational efficiency of the heat pump will provide the necessary compensation.

For undirected (blind, mindless) evolution to work without violating the 2nd Law, natural selective processes (successful competition between organisms with differential functionality-selectivity-fecundity, simplifying and ignoring luck) will presumably provide the equivalent role of the heat pump.

Entropy (or X-entropy) will, however, be quantified by an appropriate measure of complexity, not functionality-selectivity-fecundity (since what we are concerned with here is the unplanned/unintelligent/mindless development of “organised”, or “specified”, complexity).

Thus, in order to demonstrate that undirected evolution works without violating the 2nd Law and that natural selective processes can indeed supply the role of a heat pump, it needs to be demonstrated empirically that there exists (on average) a significant positive correlation between appropriately quantified increases in functionality-selectivity-fecundity and appropriately quantified increases in complexity.

Is there any empirical evidence which would reliably suggest this?

163.
keiths says:

Granville,

OK, you are right. If I thought there was one chance in a million of changing your mind…

The probability approaches one as the strength of your argument increases. If your argument remains as weak as it is now, then the odds remain low. It’s up to you to deliver.

I understand that it makes you feel better to pretend that our minds are closed and that Lizzie and I have dismissed your arguments out of hand. Anyone reading the thread can see how ludicrous that idea is.

For example, I have taken the time to understand and ponder your “X-entropy” concept. I explain above why it doesn’t make sense, and I conclude that

…the compensation argument works just fine even in terms of your bogus “X-entropy” concept.

Do you disagree with my critique? If so, then tell me exactly what you disagree with, give your reasons for disagreeing with it, and present a counterargument in favor of your position.

It’s called debate, Granville.

164.
cantor says:

I was about to commend you for the lucidity of your post #153 (with the exception of the gratuitous “concede” and “bogus”), and then you had to go and spoil it by posting this.

165.
Thomas2 says:

EL (at 158) –

It seems surprising that an argument against unintelligent or blind evolution should be countered as “an argument from incredulity”.

This phraseology seems to suggest that skeptics should uncritically acquiesce to arguments from credulity, and that Darwinian evolutionary science relies on arguments from gullibility!

Science requires adequate positive evidence for its claims, and is required to be accessible to proper scrutiny: “he who asserts must prove”. A healthy skepticism should be welcomed, not disparaged, surely?

(I don’t mean this unkindly – I appreciate your posts).

166.
keiths says:

cantor,

Still venting, I see.

But since we are discussing “X-entropy”, what do you think of the concept?

167.
cantor says:

since we are discussing “X-entropy”, what do you think of the concept?

I think I care more about respect and manners than I do about X-entropy.

168.
keiths says:

cantor,

Interesting that you are offended by my use of the word “bogus” to describe Granville’s “X-entropy” concept, but not by Granville’s characterization of me as closed-minded with a less than one-in-a-million chance of changing my mind.

You’re not just an acutely sensitive tone troll, you’re a biased one.

Please thicken your skin and work on your bias problem. If you’re unwilling, then perhaps the Internet is not the right place for you.

169.
cantor says:

Perhaps you are just a very young man who hasn’t yet had his sharp edges smoothed by the rough road of life.

Or perhaps you’re older and suffer from arrested development.

Or maybe you’re just an internet warrior, and in real life you are much more courteous and respectful. I hope so, for your own sake.

170.

Cantor: as you and Granville seem to be arguing that what is crucial here is that improbable arrangements of matter will not form spontaneously, can you give me a clear account of how you are defining and computing the probability of an arrangement?

It seems to depend on some value “M” which, if I understand you correctly, stands for “macroscopically describable”. I’m not sure if Granville agrees here, but I would like to know what you mean by this, and how it relates to any form of the 2nd Law of thermodynamics that you think is relevant to Granville’s argument.

171.
cantor says:

re: Liddle@167

To be clear, I am not defending Granville’s paper. I am interested in calmly exploring where it is right and where it is wrong.

The X-entropy argument has always puzzled me, because I could think of counter-examples (very similar to keiths’ O2/CO2 one).

I think I have already answered the questions you asked in post 135. It’s possible you missed it in the flurry of posts around that time.

172.
CS3 says:

So let’s see if I have this all straight.

1) Isaac Asimov publishes an article in the Smithsonian Institute Journal, entitled “In the game of energy and thermodynamics, you can’t even break even”, arguing that, when applying the Second Law of Thermodynamics to the improbability of organisms due to evolution, there is no conflict because the increase in the complexity of organisms is compensated by the increase in entropy of the Sun. To quote the article itself:

You can argue, of course, that the phenomenon of life may be an exception [to the second law]. Life on earth has steadily grown more complex, more versatile, more elaborate, more orderly, over the billions of years of the planet’s existence. From no life at all, living molecules were developed, then living cells, then living conglomerates of cells, worms, vertebrates, mammals, finally Man. And in Man is a three-pound brain which, as far as we know, is the most complex and orderly arrangement of matter in the universe. How could the human brain develop out of the primeval slime? How could that vast increase in order (and therefore that vast decrease in entropy) have taken place? Remove the sun, and the human brain would not have developed…. And in the billions of years that it took for the human brain to develop, the increase in entropy that took place in the sun was far greater; far, far greater than the decrease that is represented by the evolution required to develop the human brain.

2) Daniel Styer publishes an article in the American Journal of Physics, entitled “Entropy and Evolution”, arguing that, when applying the Second Law of Thermodynamics to the improbability of organisms due to evolution, there is no conflict because the increase in the complexity of organisms is compensated by the increase in entropy of the cosmic microwave background. His paper is a quantitative version of the compensation argument frequently made in textbooks and by prominent Darwinists such as Isaac Asimov and Richard Dawkins. To quote the article itself:

Does the second law of thermodynamics prohibit biological evolution?…Suppose that, due to evolution, each individual organism is 1000 times “more improbable” than the corresponding individual was 100 years ago. In other words, if Ui is the number of microstates consistent with the specification of an organism 100 years ago, and Uf is the number of microstates consistent with the specification of today’s “improved and less probable” organism, then Uf = 10^-3 Ui.

Presumably the entropy of the Earth’s biosphere is indeed decreasing by a tiny amount due to evolution, and the entropy of the cosmic microwave background is increasing by an even greater amount to compensate for that decrease. But the decrease in entropy required for evolution is so small compared to the entropy throughput that would occur even if the Earth were a dead planet, or if life on Earth were not evolving, that no measurement would ever detect it.

3) Emory Bunn publishes an article in the American Journal of Physics, entitled “Evolution and the Second Law of Thermodynamics”, arguing that, when applying the Second Law of Thermodynamics to the improbability of organisms due to evolution, there is no conflict because the increase in the complexity of organisms is compensated by the increase in entropy of the cosmic microwave background. His estimate of the improbability of life due to evolution is “more generous” than Styer’s. To quote the article itself:

We now consider (dS/dt)life… far from being generous, a probability ratio of Ui/Uf = 10^-3 is probably much too low. One of the central ideas of statistical mechanics is that even tiny changes in a macroscopic object (say, one as large as a cell) result in exponentially large changes in the multiplicity (that is, the number of accessible microstates). I will illustrate this idea by some order of magnitude estimates. First, let us address the precise meaning of the phrase “due to evolution.” If a child grows up to be slightly larger than her mother due to improved nutrition, we do not describe this change as due to evolution, and thus we might not count the associated multiplicity reduction in the factor Ui/Uf. Instead we might count only changes such as the turning on of a new gene as being due to evolution. However, this narrow view would be incorrect. For this argument we should do our accounting in such a way that all biological changes are included. Even if a change like the increased size of an organism is not the direct result of evolution for this organism in this particular generation, it is still ultimately due to evolution in the broad sense that all life is due to evolution. All of the extra proteins, DNA molecules, and other complex structures that are present in the child are there because of evolution at some point in the past if not in the present, and they should be accounted for in our calculation… We conclude that the entropy reduction required for life on Earth is far less than |dS life| ~ 10^44 k… the second law of thermodynamics is safe.

4) Granville Sewell submits a paper to Applied Mathematics Letters in response to this compensation argument. (It is peer-reviewed and accepted, but later withdrawn “not because of any errors or technical problems found by the reviewers or editors.”) His central claim is, “if an increase in order is extremely improbable when a system is isolated, it is still extremely improbable when the system is open, unless something is entering (or leaving) which makes it not extremely improbable.”

5) Bob Lloyd publishes a viewpoint in the Mathematical Intelligencer disagreeing with Sewell’s withdrawn article. Lloyd backs Styer and Bunn, arguing that, when applying the Second Law of Thermodynamics to the improbability of organisms due to evolution, there is no conflict because the increase in the organism complexity is compensated by the increase in entropy of the cosmic microwave background. To quote the article itself:

The qualitative point associated with the solar input to Earth, which was dismissed so casually in the abstract of the AML paper, and the quantitative formulations of this by Styer and Bunn, stand, and are unchallenged by Sewell’s work.

6) A paper similar to the withdrawn AML paper is published in the proceedings of the Biological Information – New Perspectives conference.

7) Critics on UD are outraged that Sewell is such an idiot for applying the Second Law of Thermodynamics to the improbability of organisms due to evolution, when (paraphrasing) “every physicist on earth knows it is only applicable to energy”. Some say he would be “laughed out of any reputable physics conference.” It is just “bad science.” Some feel the sabotage of the original publication is completely justified. Some would go so far as to burn the books.

8) So far, no reports of demands for retraction of Asimov, Styer, Bunn, or Lloyd’s articles. No reports of Smithsonian Institute Journals, American Physics Journals, or Mathematical Intelligencers burning in the streets.

Is there a double standard?
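As an aside, the arithmetic behind the Styer-style compensation estimates quoted in items 2 and 3 can be sketched with round numbers. The organism count and the solar figures below are my own illustrative assumptions, not values taken from either paper:

```python
import math

k = 1.380649e-23                      # Boltzmann constant, J/K
CENTURY = 100 * 365.25 * 24 * 3600.0  # seconds in a century

# The quoted assumption: each organism becomes 1000x more improbable
# per century, i.e. Uf = 1e-3 * Ui, so per organism per century:
dS_per_organism = k * math.log(1e-3)  # about -9.5e-23 J/K

N_ORGANISMS = 1e32  # assumed global organism count (illustrative only)
rate_life = N_ORGANISMS * dS_per_organism / CENTURY  # J/(K*s)

# Entropy throughput of the Earth, with round illustrative numbers:
# ~1.2e17 W absorbed from the Sun at ~5800 K, re-radiated at ~255 K.
P_sun = 1.2e17  # W
rate_throughput = P_sun * (1.0 / 255 - 1.0 / 5800)  # J/(K*s)

print(rate_life)        # about -3 J/(K*s)
print(rate_throughput)  # about 4.5e14 J/(K*s)
```

Under these assumptions the biosphere’s entropy decrease is roughly fourteen orders of magnitude smaller than the Earth’s entropy throughput, which is the quantitative shape of the compensation claim being debated.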

173.
keiths says:

CS3,

Is there a double standard?

No.

What I’ve been saying, and I think Lizzie would agree, is that the second law has no more to do with evolution than the first law, or the zeroth law, or Newton’s laws of motion, or Coulomb’s law.

All of these laws apply to every physical process — otherwise they wouldn’t be laws! — but they have no particular relevance to discussions of evolution because evolution doesn’t violate any of them.

The only reason that Asimov, Bunn, Styer and Lloyd tackled the topic was to debunk the common misconception among creationists and IDers that evolution does violate the second law.

Absent that misconception, there is no reason to bring the second law into discussions of evolution versus ID.

174.

keiths @170:

Nice try. Thermodynamics as it relates to evolution is either a live topic or not. If all these folks CS3 cited (plus the article Matzke referred us to on this very issue that I mentioned) think it is worth discussing, then it is certainly fine for Granville to respond to those kinds of papers and say that the alleged answers fail.

Your last sentence is clearly false. There are scholars who have addressed the thermodynamics issue on its own merits, absent any impetus to debunk creationists.

Surely you aren’t arguing that all those papers CS3 cited make the argument that thermodynamics are irrelevant? Rather, at least based on the summaries CS3 provided, they appear to be saying thermodynamics are very much relevant, but the authors think they can be compensated for because of some alleged openness of the system, changes in background radiation, etc.

Let’s drop the double standard. Either thermodynamics are relevant or they are not. Lots of scholars seem to think they are. Maybe you and Elizabeth should get on board; or at the very least consider that perhaps it is an issue that merits attention. Then we can talk about whether or not the alleged approaches to deal with the thermodynamics issue are rational and successful.

175.
cantor says:

Multiple choice:

1) evolution violates the 2nd Law

2) evolution does not violate the 2nd Law

3) it’s impossible to say either way

176.
keiths says:

Eric,

Nice try. Thermodynamics as it relates to evolution is either a live topic or not.

Oh, it’s definitely a live topic! For people like Granville, it’s a live topic because they actually think (!) that thermodynamics somehow forbids evolution. For people like Asimov, Bunn, Styer, Lloyd, Lizzie, and me, it’s a live topic because so many people share Granville’s misconception, to our great dismay.

If all these folks CS3 cited (plus the article Matzke referred us to on this very issue that I mentioned) think it is worth discussing, then it is certainly fine for Granville to respond to those kinds of papers and say that the alleged answers fail.

It’s perfectly fine for Granville to write any kind of paper he wants. The paper he wrote, however, is so shoddy and full of mistakes from stem to stern that it should never have been accepted by the organizers of the BI symposium and it certainly never deserved to see the inside of a serious scientific publication. Granville is mistaken to think that the second law poses any kind of a problem for evolution.

Your last sentence is clearly false. There are scholars who have addressed the thermodynamics issue on its own merits, absent any impetus to debunk creationists.

A misconception is a misconception regardless of who holds it, Eric. If the scholars you have in mind tackled the issue because they thought it was unresolved, they were mistaken. If they tackled it because they knew it was widely misunderstood and wanted to dispel the confusion, then good for them.

Surely you aren’t arguing that all those papers CS3 cited make the argument that thermodynamics are irrelevant? Rather, at least based on the summaries CS3 provided, they appear to be saying thermodynamics are very much relevant, but the authors think they can be compensated for because of some alleged openness of the system, changes in background radiation, etc…

Either thermodynamics are relevant or they are not. Lots of scholars seem to think they are. Maybe you and Elizabeth should get on board; or at the very least consider that perhaps it is an issue that merits attention.

Thermodynamics is relevant to evolution in exactly the same sense that it is relevant to knitting. Neither activity is allowed to violate the second law, and neither activity does. Beyond that, there is no reason to bring up the second law when discussing knitting or evolution. No reason, that is, unless people start latching on to the idea that they do somehow violate the second law.

Fortunately, the knitters of the world do not have to deal with uninformed people who think that knitting violates the second law (although DaveScot famously stated that he violated the second law every time he typed something).

If only evolutionary biologists and physicists were as fortunate as the knitters!

Then we can talk about whether or not the alleged approaches to deal with the thermodynamics issue are rational and successful.

177.

CS3:

I think you make a fair point, and indeed I do think that the rebuttals you cite are not in fact rebuttals of Granville’s 2nd Law argument against evolution. I think they are rebuttals of a different argument (possibly not one in fact made), namely, that in the absence of any external energy input, the Earth would be cooling, not heating, and therefore increasing in entropy, not decreasing. And, self-evidently, we observe entropy reductions on earth – things get hotter, and indeed, trees grow from seeds, and bacteria from other bacteria, resulting in an increase of energy stored in a form capable of doing useful work, as we demonstrate every time we use fossil fuels, or indeed, make a bonfire.

This however does not mean that the 2nd Law has been violated, because when the sun heats the earth it is not “passing heat from a cooler to a hotter” but the reverse.

However, Granville correctly points out that this is not his argument, and as keiths correctly pointed out to me – the reason trees can grow is not because the sun is increasing in entropy (although it is), but because there is a local decrease in entropy (the conversion of carbon dioxide and water into sugar) accompanied by a further local decrease in entropy (the air cools). The non-violating sun to earth transfer isn’t the issue here, although it makes the sugar production possible; the puzzle that needs to be solved is how heat can be passed from “a cooler to a hotter” here on earth.

Another way of putting this is to say: how can there be substantial local entropy decreases, i.e. how can there be anything other than diffusion towards uniform mixing, if the 2nd Law holds?

Or, to take Granville’s example: why do we consider the reduction of a town to rubble by a tornado perfectly unremarkable, but would consider the restoration back to order by a second tornado utterly extraordinary?

Leaving aside the 2nd Law for a moment, I’d say: “restoring something to order” is in any case not unique to living beings. Sure, we know that people can restore a town to order, but, as my saltpan example showed, order can be restored (in the sense that what was diffused can become undiffused again) in a salt pan after a flood. Indeed, saltpans are the result of a repeating cycle of flooding and evaporation: a phase in which salt diffuses into water to produce a substantially uniform saline fluid is followed by a phase of evaporation and separation, at the peak of which we have, again, salt and water separated into two undiffused volumes, before the next reflooding. The genie can be put back into the bottle.

The same applies to tornadoes themselves. In still air, the directions of movement of any molecule at any time are equiprobable. At any moment, molecules moving upwards are no more prevalent than molecules moving sideways, or downwards or diagonally, east, west, north or south. Directions are uniformly distributed throughout the air – there is no spatial autocorrelation between directions of movement. In a tornado, however, this is spectacularly reversed – there is massive spatial autocorrelation between the directions of movement of air molecules – the directions are massively undiffused! And yet still air can readily form a vortex, just as a vortex can revert to being still air.

In other words, there is plenty of precedent, with no recourse to anything other than the “four fundamental forces” for things to undiffuse, as well as to diffuse.

How can this be? One of the expressions of the 2nd Law that Granville cites in his BIOcomplexity paper is:

3. In an isolated system, the direction of spontaneous change is from an arrangement of lesser probability to an arrangement of greater probability

At first glance this is a rather odd statement, and seems merely to be stating, tautologically, that “more probable arrangements are more probable than less probable arrangements”. The reason it isn’t a tautology is that, as I keep trying to say, “probability” is not a property of an arrangement, but of the process that generated the arrangement. Here are two arrangements of Hs and Ts:

TTTHTTHHTT

TTTHTTHHTT

Which is more “probable”? “But they are both the same!” you say. But I will now tell you that I generated the first by using the formula =IF(RAND()>0.5,"H","T") in Excel and pasting the results into the post, and I generated the second by carefully copying the first into a new line. So the probability of getting the first pattern was 1/(2^10), whereas the probability of getting the second was near 1 (if I’d used cut and paste, it would have been exactly 1, but I relied on hand-typing it).

In other words, the probability of an arrangement cannot be discerned by looking at the arrangement; it must be computed as the probability of that arrangement given a generative process.
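To make that concrete, here is a minimal sketch (my own illustration, not anything from the thread) of how the very same H/T arrangement gets a very different probability under the two generative processes described above:

```python
from fractions import Fraction

def prob_under_fair_flips(seq):
    # Each position is an independent fair coin flip, so any one
    # specific sequence has probability (1/2) ** len(seq).
    return Fraction(1, 2) ** len(seq)

def prob_under_copying(seq, source):
    # A deterministic copy reproduces its source with certainty.
    return Fraction(1) if seq == source else Fraction(0)

target = "TTTHTTHHTT"

p_flips = prob_under_fair_flips(target)       # 1/1024, i.e. 1/(2^10)
p_copy = prob_under_copying(target, target)   # 1

print(p_flips, p_copy)
```

Same string, two processes, two probabilities – which is the whole point: the number belongs to the process, not to the pattern.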

So to look again at that 3rd statement of the 2nd Law quoted by Granville: it is not as silly as it looks at first. A clearer way of saying the same thing would be something like:

“In a set of items in which an item is as likely to adopt one state, or move in one direction, as any other, the direction of spontaneous change in the arrangement will be from lesser to greater uniformity, because the number of possible arrangements rises monotonically with the degree of uniformity.”

An example would be a set of spherical beads in a tray being jiggled energetically. If the beads start off in a nice close-packed arrangement, the change will be in the direction of uniformly distributed beads, because there are simply more arrangements of uniformly distributed beads than there are of closely packed beads. And because the possible uniform arrangements are more numerous, they are also more probable.
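That counting claim is easy to check numerically. A sketch with toy numbers of my own choosing (12 labelled beads over 4 equal cells of the tray), using the multinomial coefficient to count the microstates of each macrostate:

```python
from math import factorial

def microstates(occupancy):
    # Multinomial coefficient: the number of ways N labelled beads
    # can realise a given pattern of cell counts (n1, n2, ..., nm).
    n = sum(occupancy)
    w = factorial(n)
    for count in occupancy:
        w //= factorial(count)
    return w

packed = microstates([12, 0, 0, 0])  # all 12 beads close-packed in one cell
even = microstates([3, 3, 3, 3])     # beads spread evenly over the 4 cells

print(packed, even)  # 1 vs 369600
```

With every microstate equally likely under energetic jiggling, the evenly spread macrostate is hundreds of thousands of times more probable than the close-packed one.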

However, this is an “isolated system”, consisting of a tray, beads and jiggling. If something from the outside comes along and raises one end of the tray (“does work” on the tray), the probabilities change dramatically: now a uniform distribution of beads becomes extremely unlikely, and the most probable arrangement is a close-packed arrangement of beads at one end of the tray. It’s not that an improbable arrangement has, against all odds, somehow defied “the laws of statistics”, or, indeed, the 2nd Law of thermodynamics, and occurred, but that the generative conditions under which the most probable arrangement used to be “uniform distribution” have changed, and the most probable arrangement now is “close-packed at one end”.

And so, to answer Granville’s question as to how the influx of energy from elsewhere can make an improbable thing probable: it can do just that – make the most probable arrangement something other than “uniformly diffused”.

How such an influx of energy can do this depends on what is being “undiffused”, by what, and where. But not only does it happen all the time, in both biological and non-biological systems, it does not violate the 2nd Law, because the 2nd Law specifically allows a local decrease in a system’s entropy (e.g. more closely packed beads on a tray) to occur if something from outside does work on the system. And so if we see an “undiffused” arrangement emerging spontaneously from a “diffused” arrangement, as for example is proposed by OoL theories (not that we have a really good one yet), we need not conclude that the 2nd Law must have been violated for such a thing to happen – what we need to do is figure out what, for example, from outside a system of molecules diffusing around in an archaeosoup, might have undiffused them. A number of proposals involve thermal gradients, or cyclical convection currents, which are not forbidden by the 2nd Law, even though the creation of a thermal gradient where there was previously no gradient does require work to be done on the system, just as work was required to tip the tray of beads and produce a gravity gradient. Given a gradient, “undiffusion” is possible, and gradients are not forbidden by the 2nd Law.

And, indeed, incoming solar energy, acting nonuniformly on the terrestrial surface, regularly provides such a gradient. And all that is required for the energy to act non-uniformly is for the earth to turn, which it does! But it doesn’t even need to be the sun: at least some OoL theories posit that hydrothermal vents provided the necessary gradients.

And hydrothermal vents do not violate the 2nd Law any more than tornadoes do.

(Apologies for the long post – if I were smarter I could probably have posted a shorter one! But I’d appreciate comments, nonetheless, from those who make it to the end :))

178.

Cantor:

Multiple choice:

1) evolution violates the 2nd Law

2) evolution does not violate the 2nd Law

3) it’s impossible to say either way

2, in the sense that what is proposed in evolutionary theory is not a mechanism that would involve violation of the 2nd Law.

On the other hand, the ID proposal does propose, not a violation, but the intervention of an agent to whom the law does not apply – something that can do work, i.e. move stuff around (molecules, ions, whether in cells, brains or primordial soups), but which does not experience an increase in entropy.

Or does anyone suggest that minds, whether ours or that of a designing deity, do increase in entropy after they have done their designing?

179.
Thomas2 says:

ks (at 153) –

I am a novice here, but I have an interest in whether or not mindless evolution does in fact violate the 2nd Law or not, so I’d be grateful for your view on whether the following is on the right track:

For your apparatus to work, the solar cell will have to power a heat pump, and the operational efficiency of the heat pump will provide the necessary compensation.

For undirected (blind, mindless) evolution to work without violating the 2nd Law, natural selective processes (successful competition between organisms with differential functionality-selectivity-fecundity, simplifying and ignoring luck) will presumably provide the equivalent role of the heat pump.

Entropy (or X-entropy) will, however, be quantified by an appropriate measure of complexity, not functionality-selectivity-fecundity (since what we are concerned with here is the unplanned/unintelligent/mindless development of “organised”, or “specified”, complexity).

Thus, in order to demonstrate that undirected evolution works without violating the 2nd Law and that natural selective processes can indeed supply the role of a heat pump, it needs to be demonstrated empirically that there exists (on average) a significant positive correlation between appropriately quantified increases in functionality-selectivity-fecundity and appropriately quantified increases in complexity.

Is there any empirical evidence which would reliably suggest this?

180.
cantor says:

OK, now let’s reword the question

1) the unguided purposeless action of the four known physical laws cannot be the sole explanation for how this planet was transformed from barren and lifeless to what we see today, because that would violate the 2nd law

2) the unguided purposeless action of the four known physical laws is the sole explanation for how this planet was transformed from barren and lifeless to what we see today, and that does not violate the 2nd law

3) it is impossible to make a definitive quantitative argument either way

181.
Joe says:

And still no evidence that unguided evolution is up to the task.

Strange that when all it would take is actual evidence to refute Granville, evolutionists cannot offer any.

And that alone is very telling…

182.
kairosfocus says:

FYI-FTR on the flawed compensation argument.

183.

Fourth option, Cantor:

4) Nothing in the theory of evolution violates the 2nd Law of thermodynamics.

Granville is incorrect when he claims that if it were true, the 2nd Law would be violated.

184.
Joe says:

4) Nothing in the theory of evolution violates the 2nd Law of thermodynamics.

What theory of evolution? Can you provide a link or reference, please? Maybe some testable hypotheses?

185.
cantor says:

Fourth option, Cantor:

4) Nothing in the theory of evolution violates the 2nd Law of thermodynamics.

That’s the answer to a completely different question.

Granville is incorrect when he claims that if it were true, the 2nd Law would be violated.

That’s a red herring. Please answer the question on its own terms.

186.
CS3 says:

keithS:

First, let’s be fair and distinguish what Sewell may personally believe, and what is actually in the paper. As has been mentioned several times, in the paper, he is very careful not to claim that evolution has violated the Second Law. All he says is that the methodology Asimov, Styer, Bunn, and many others have used to show that it has not been violated is flawed. He allows there may be other ways to show that it has not been violated. You can’t just say it is fine to make any argument, no matter how flawed, against a misconception. If the argument really is flawed, then either there must be another argument that is not flawed, or else perhaps the misconception is not actually a misconception.

So, while I know you agree with their conclusion that it has not been violated, do you agree with their methodology to reach that conclusion? If you do agree with their methodology, then don’t attack Sewell for making the same assumptions as the people with whom you agree. Either dispute the argument that is specific to him (that, if something is extremely improbable when a system is isolated, it is still extremely improbable when a system is open, unless something enters or leaves that makes it not extremely improbable), or just leave it alone. If you don’t agree with their methodology, then I would honestly encourage you to write a paper pointing out their mistakes and providing your explanation for why there is no conflict (for example, saying that the Second Law is only about energy, and there has been no overall decrease in thermal entropy in the universe). I am genuinely curious as to whether the countless scientists who have made the arguments that Asimov, Styer, and Bunn have made would readily concede that they were confused and you are right.

Secondly, your argument that there must be no conflict with the neo-Darwinian mechanism because laws apply to all physical processes only makes sense if you assume a priori with 100% certainty that neo-Darwinism is an accurate description of the process. If I were to propose a detailed mechanism for how, say, photosynthesis works, and someone were to later find that, according to my proposed mechanism, there would be a net energy increase in the universe, that would conflict with the First Law, and thus cast serious doubt on my mechanism as an accurate description of the physical process of photosynthesis.

Elizabeth Liddle:

Certainly we are in agreement that natural forces can do things other than distribute particles uniformly. Sewell must agree with that also; otherwise, he would be able to definitively claim that the Second Law has been violated, because obviously the Earth is not in a state of uniform distribution. Let me address this in the context of the paper and of the more general argument.

In the paper, the statement that “if something is extremely improbable when a system is isolated, it is still extremely improbable when the system is open, unless something enters or leaves that makes it not extremely improbable” makes no claims about what is or is not improbable. As I mentioned in comment 84, not everyone makes the compensation argument. Some, like you, believe that human brains, computers, and encyclopedias are not an improbable result of natural forces acting on a sunlit barren planet, and thus your position is not threatened by the paper itself.

To be clear, though, that is definitely not the position Asimov, Styer, Bunn, and Lloyd were making. If they did not think anything improbable was happening, then there would be no need for them to convert the probabilities of improbable events into entropies and compare that to a different type of entropy to satisfy an inequality. And, even if the energy were causing these events, it makes no sense for them to try to convert from the original improbability of what happened to how much energy is needed. It takes energy to flip coins, but it takes no more energy to flip all heads than to flip half heads and half tails.

Outside of the paper, such as on this message board, I assume it is safe to say that Sewell, and I, do disagree with your position that human brains, computers, and encyclopedias are not an improbable result of natural forces acting on a sunlit barren planet. However, I certainly admit I can’t prove it. Obviously, the four unintelligent natural forces can create things with order (or appearances of order) and not just uniform distributions, as would be the case if diffusion alone were operative. However, while of course not always trivial, it is generally not too hard to imagine the types of order we see, such as snowflakes, that are produced indisputably by only natural forces (excluding life and human inventions) as products of just gravitation, electromagnetism, and the strong and weak nuclear forces. It is much more difficult to imagine spaceships and iPhones as products of those forces. But again, that is admittedly an intuition, not a proof. Just as my skepticism that a tornado could turn rubble into a house is based on intuition.

As I quoted previously, my University Physics book has a section entitled “Building physical intuition” about the Second Law with examples such as this (e.g., card shuffling). I think it is fair to say that the Second Law says that the direction of spontaneous change is from an arrangement of lesser probability to an arrangement of greater probability. The only problem is that, outside of thermal entropy, which is easily quantifiable, allowing one to compute with certainty which arrangement is more probable, in other cases, we usually can only use intuition to guess which arrangement has greater probability. And intuition can sometimes be wrong, but that doesn’t mean it isn’t sometimes pretty obvious.

To use my magnetized coin flipping example again, I can believe that natural forces could cause all heads to come up, even though that would be an extremely improbable macrostate (with only one microstate) based on random chance alone, because that is consistent with the type of “order” I could expect based on my knowledge and experience with the natural forces (and in fact, would happen if I placed a magnet under the table). However, I could not believe that natural forces could cause the coins to come up as a bit-wise representation of a great novel (without, say, an intelligent human setting initial conditions to assure that happens, for example, by placing small magnets selectively under certain coins), even though that macrostate is no more improbable (assuming only chance) than the all heads macrostate (in fact there are more microstates in this case), because that is not a type of “order” consistent with my understanding or experience of what the natural forces can do. Could they land such that every 2i+5 is heads? I would think probably not, but maybe there is a way that could happen that I just haven’t thought of. It is not always easy to say exactly what the four unintelligent natural forces can and can’t do, but that doesn’t mean, in my opinion, that there aren’t some cases that are obvious. (Disclaimer: obviously all of these statements are made assuming a reasonable limit on the total number of flips.)
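CS3’s point that a specific all-heads sequence is no more improbable (on chance alone) than any other specific sequence, while only the “about half heads” macrostate is common, can be checked directly. A sketch, with 100 flips chosen arbitrarily as the “reasonable limit on the total number of flips”:

```python
from math import comb

N = 100  # an arbitrary "reasonable limit" on the number of flips

all_heads = comb(N, N)        # the all-heads macrostate: exactly 1 microstate
half_heads = comb(N, N // 2)  # "exactly half heads": ~1.01e29 microstates

# Every *specific* sequence -- all heads, or a bit-wise encoding of a
# novel -- has the same chance-only probability, 1 / 2**N. What differs
# between macrostates is how many sequences fall inside each.
p_specific = 1 / 2 ** N

print(all_heads, half_heads)
```

So the interesting question is never the bare improbability of a specific outcome, but whether some generative process (a magnet under the table, or an intelligent agent) makes that outcome expected.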
That said, I agree that, apart from the refutation of the compensation argument (which was the content of the paper itself), these other arguments are essentially equivalent to the CSI arguments made, with more rigor, by Dembski and others. Thus, I would not expect anyone unconvinced by those arguments to be convinced by Sewell’s arguments in that regard.

187.

CS3, first of all:

First, let’s be fair and distinguish what Sewell may personally believe, and what is actually in the paper. As has been mentioned several times, in the paper, he is very careful not to claim that evolution has violated the Second Law.

From Granville Sewell’s Mathematical Intelligencer paper:

[The other point] is that to attribute the development of life on Earth to natural selection is to assign to it–and to it alone, of all known natural “forces”–the ability to violate the second law of thermodynamics and to cause order to arise from disorder.

Clearly, Granville is stating here that “to attribute the development of life on Earth to natural selection” is to propose a mechanism that has “the ability to violate the second law of thermodynamics”.

So, while it is true that he is not saying that evolution has biolated the 2nd law, he is clearly saying that if it happened, it would have violated it!

(If so, equally, a Designer would have violated it, but that is a different problem.)

And it is precisely this that keiths and I contend is incorrect.

Secondly: thanks for your specific comments on my post. It does indeed seem that we agree on much. Personally, what I am concerned with is whether Granville’s argument makes sense, not whether others have correctly diagnosed what is wrong with it. That said, while I have not read in detail some of the alleged refutations, it is perfectly true, as far as I can see, that if there is a local entropy decrease in a system, work must have been done on that system from a system outside it, and when a system does work, its entropy increases. If work is done by a hot system passing heat to a cooler system, the entropy of the cooler system will increase at least as much as, and usually more than, the entropy of the hotter system decreases. On earth we have a hot system nearby (the sun), and so it is not surprising that work can be done on local systems on our cooler earth, resulting in local entropy decreases. But even if we did not have the sun, and the only hot thing was the molten core of our planet, it could still cause temperature gradients that can move things, and that movement can result in a local decrease of entropy.

Here is where I think you, and Granville, go wrong:

I think it is fair to say that the Second Law says that the direction of spontaneous change is from an arrangement of lesser probability to an arrangement of greater probability.

In this form it would be a simple tautology: probable arrangements are more probable than improbable arrangements.

We don’t need a 2nd Law to tell us that! That is why I gave my example above of two identical series of Hs and Ts. You simply cannot examine the pair and say which one was “probable” and which was not. You need a crucial additional piece of information: the process that generated them. In fact, the first was generated by a process that gave that sequence a 1/(2^10) probability of occurring, while the other was generated by a process that gave it a near-unity probability of occurring.

This is the problem that lies at the heart not only of Granville’s argument, but also of Dembskian ID: we simply cannot look at a pattern and say whether it was probable or improbable. We have to say: given generative process X, this pattern was (or was not) probable.

A patch of sun-warmed earth can vastly increase the probability that the air molecules above it will stop moving in equiprobable directions and start to move systematically in an ascending spiral, creating a column of low-entropy air capable of piling a diffuse layer of leaves into a close-packed pile.

Such an arrangement of air and leaves is vastly improbable in the absence of infra-red solar heat warming a patch of earth. But in the presence of that infra-red heat, the dust devil becomes not only possible, but highly probable. If you sat watching the scene under the right conditions, you’d be surprised not to see one.

In other words, whenever someone mentions the “probability” of an arrangement, they need to specify the conditions under which the arrangement would be “probable”. Under isolated conditions, the most probable arrangement for a system is uniform diffusion. But if work can be done on the system from outside, a very different, and sometimes highly complex, arrangement can become much more probable than uniform diffusion.

And this is precisely why the 2nd Law allows for local entropy decreases if some neighbouring system is able to do work – but that neighbouring system, in doing the work, will suffer an entropy increase. Which is fine.

188.
keiths says:

Lizzie,

So, while it is true that he is not saying that evolution has biolated the 2nd law…

Another fortuitous typo!

189.
cantor says:

cantor@181:
That’s the answer to a completely different question.


attn Liddle:
I will give you a 4th option:

4) I do not know if it is possible to make a definitive quantitative argument either way

190.
keiths says:

CS3,

I’ve mentioned this a couple of times already but people (including you) haven’t picked up on it, so let me try again.

When Granville argues against the compensation idea, he is unwittingly arguing against the second law itself.

It’s easy to see why. Imagine two open systems A and B that interact only with each other. Now draw a boundary around just A and B and label the contents as system C.

Because A and B interact only with each other, and not with anything outside of C, we know that C is an isolated system (by definition). The second law tells us that the entropy cannot decrease in any isolated system (including C). We also know from thermodynamics that the entropy of C is equal to the entropy of A plus the entropy of B.

All of us (including Granville) know that it’s possible for entropy to decrease locally, as when a puddle of water freezes. So imagine that system A is a container of water that becomes ice.

Note:

1. The entropy of A decreases when the water freezes.

2. The second law tells us that the entropy of C cannot decrease.

3. Thermodynamics tells us that the entropy of C is equal to the entropy of A plus the entropy of B.

4. Therefore, if the entropy of A decreases, the entropy of B must increase by at least an equal amount to avoid violating the second law.

The second law demands that compensation must happen. If you deny compensation, you deny the second law.
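keiths’ bookkeeping can be illustrated numerically. A sketch with assumed round values of my own (latent heat of fusion of water ≈ 334 kJ/kg; the water freezing at 273.15 K while the surroundings sit at a cooler 263.15 K – illustrative numbers, not anything from the thread):

```python
# System A: 1 kg of water freezing at its freezing point.
# System B: the cooler surroundings absorbing the released latent heat.
# C = A + B interacts with nothing else, so C is isolated and dS_C >= 0.

LATENT_HEAT = 334_000.0  # J released by freezing 1 kg of water (approx.)
T_FREEZE = 273.15        # K, temperature of the freezing water
T_SURROUND = 263.15      # K, temperature of the surroundings (cooler)

dS_A = -LATENT_HEAT / T_FREEZE    # entropy of A decreases as water freezes
dS_B = +LATENT_HEAT / T_SURROUND  # entropy of B increases ("compensation")
dS_C = dS_A + dS_B                # net change of the isolated system C

print(round(dS_A), round(dS_B), round(dS_C))  # roughly -1223, 1269, 46
```

Because the same quantity of heat enters B at a lower temperature than it left A, B’s entropy gain necessarily exceeds A’s loss, and dS_C stays positive – exactly the compensation the second law demands.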

Thus Granville’s paper is not only chock full of errors, it actually shoots itself in the foot by contradicting the second law!

It’s a monumental mess that belongs nowhere near the pages of any respectable scientific publication. The BI organizers really screwed up when they accepted Granville’s paper.

191.

Cantor: I think it’s perfectly possible to tell whether a proposed explanation would violate the 2nd Law or not, and the evolutionary explanation doesn’t.

If it did, no-one would take it seriously.

The Interventionist Designer hypothesis, however, does require suspension of the 2nd Law.

Which may be why a lot of people don’t take it seriously.

192.
Joe says:

The Interventionist Designer hypothesis, however, does require suspension of the 2nd Law.

That you think so is why no one takes you seriously- no one from the ID side anyway.

193.
Joe says:

1. The entropy of A decreases when the water freezes.

Your position cannot explain water’s existence, let alone its properties.

194.
kairosfocus says:

F/N: Predictably, the objectors have continued with talking points as usual, so I again draw attention to the FYI here and to Sewell here.

In the latter, I would draw attention to this clip:

Note that (2) [a flow gradient expression] simply says that heat flows from hot to cold regions—because the laws of probability favor a more uniform distribution of heat energy . . . . From [an eqn that entails that in such a system, d’S ≥ 0] (5) it follows that in an isolated, closed, system, where there is no heat flux through the boundary, d’S ≥ 0. Hence, in a closed system, entropy can never decrease. Since thermal entropy measures randomness (disorder) in the distribution of heat, its opposite (negative) can be referred to as ”thermal order”, and we can say that the thermal order can never increase in a closed system.

Furthermore, there is really nothing special about ”thermal” entropy. We can define another entropy, and another order, in exactly the same way, to measure randomness in the distribution of any other substance that diffuses, for example, we can let U(x,y,z,t) represent the concentration of carbon diffusing in a solid (Q is just U now), and through an identical analysis show that the ”carbon order” thus defined cannot increase in a closed system. It is a well-known prediction of the second law that, in a closed system, every type of order is unstable and must eventually decrease, as everything tends toward more probable states . . .

Once the diffusion concept is introduced and made central to GS’s argument, we are dealing with an entropy pattern and a classic case used in explaining the spontaneous tendency to disorder in systems and the rise of entropy as systems move to less constrained conditions, as the already linked FYI-FTR points out in 2(f) 15 – 16, and then further builds up through Yavorsky and Pinsky in point 4:

4] Yavorsky and Pinsky, in the textbook Physics, Vol I [MIR, USSR, 1974, pp. 279 ff.], summarise the key implication of the macro-state and micro-state view well: as we consider a simple model of diffusion, let us think of ten white and ten black balls in two rows in a container. There is of course but one way in which there are ten whites in the top row; the balls of any one colour being for our purposes identical. But on shuffling, there are 63,504 ways to arrange five each of black and white balls in the two rows, and 6-4 distributions may occur in two ways, each with 44,100 alternatives. [–> BTW, resemblance to the 500 coin example that objectors made such heavy weather of, is not coincidental; they were showing their unfamiliarity with or lack of understanding of the statistical thinking connected to thermodynamics . . . ] So, if we for the moment see the set of balls as circulating among the various different possible arrangements at random, and spending about the same time in each possible state on average, the time the system spends in any given state will be proportionate to the relative number of ways that state may be achieved. Immediately, we see that the system will gravitate towards the cluster of more evenly distributed states. In short, we have just seen that there is a natural trend of change at random, towards the more thermodynamically probable macrostates, i.e. the ones with higher statistical weights. So “[b]y comparing the [thermodynamic] probabilities of two states of a thermodynamic system, we can establish at once the direction of the process that is [spontaneously] feasible in the given system. It will correspond to a transition from a less probable to a more probable state.” [p. 284.] This is in effect the statistical form of the 2nd law of thermodynamics.
Thus, too, the behaviour of the Clausius isolated system above is readily understood: importing d’Q of random molecular energy so far increases the number of ways energy can be distributed at micro-scale in B, that the resulting rise in B’s entropy swamps the fall in A’s entropy. Moreover, given that FSCI-rich micro-arrangements are relatively rare in the set of possible arrangements, we can also see why it is hard to account for the origin of such states by spontaneous processes in the scope of the observable universe. (Of course, since it is as a rule very inconvenient to work in terms of statistical weights of macrostates [i.e. W], we instead move to entropy, through S = k ln W. Part of how this is done can be seen by imagining a system in which there are W ways accessible, and imagining a partition into parts 1 and 2. W = W1*W2, as for each arrangement in 1 all accessible arrangements in 2 are possible and vice versa, but it is far more convenient to have an additive measure, i.e. we need to go to logs. The constant of proportionality, k, is the famous Boltzmann constant and is in effect the universal gas constant, R, on a per molecule basis, i.e. we divide R by the Avogadro Number, NA, to get: k = R/NA. The two approaches to entropy, by Clausius, and Boltzmann, of course, correspond. In real-world systems of any significant scale, the relative statistical weights are usually so disproportionate, that the classical observation that entropy naturally tends to increase, is readily apparent.)
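The quoted statistical weights, and the log-additivity that motivates S = k ln W, are easy to verify. A sketch (the 63,504 and 44,100 figures are the textbook’s; the rest is my own illustration):

```python
from math import comb, log

def weight(whites_on_top):
    # Microstates with this many white balls in the top row of ten;
    # the bottom row must then hold the remaining whites.
    w = whites_on_top
    return comb(10, w) * comb(10, 10 - w)

print(weight(10))  # 1: only one way to have all ten whites on top
print(weight(5))   # 63504: the 5-5 macrostate quoted above
print(weight(6))   # 44100: one of the two 6-4 distributions

# S = k ln W is additive for independent subsystems, since W = W1 * W2:
W1, W2 = weight(5), weight(6)
print(abs(log(W1 * W2) - (log(W1) + log(W2))) < 1e-9)  # True
```

The 5-5 macrostate carries the largest weight, which is why random shuffling gravitates toward it.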

Now, what happens if one opens up a system and allows in energy?

Obviously, if it is not carefully coupled to systems that use a flow-through of mass and energy to do work, there is more energy available to be dispersed among the micro-level components, i.e. the system tends to get more disordered, by overwhelming probability. To get the added energy to create order instead, we need something that imposes that order, so that the energy does not just go to disorder.

Work, of course, is forced, ordered motion.

Hence the point of the too-often-dismissed Hoyle comment about tornadoes in junkyards. The tornado has lots of energy indeed, but no orderly pattern relevant to making a jumbo jet. So it will by overwhelming probability simply scatter parts hither and yon, and it would be astonishing beyond reasonable expectation if instead it assembled a jumbo jet from the parts.

But take those same parts and put men to work with machines according to an intelligent, purposeful plan, and the parts will end up as a jumbo jet.

Energy and mass inflow and/or outflow are not enough to explain complex functionally specific organisation and associated information. And the plan and machinery that put the organisation into effect from components may in turn be the same sort of FSCO/I-rich system that is organised and in turn requires explanation.

Indeed, it is not just Hoyle who makes a comment like this, here is Robert Shapiro in his 2006 Sci Am paper on OOL (cf cite here):

To rescue the RNA-first concept from [its] otherwise lethal defect, its advocates have created a discipline called prebiotic synthesis. They have attempted to show that RNA and its components can be prepared in their laboratories in a sequence of carefully controlled reactions, normally carried out in water at temperatures observed on Earth . . . . Unfortunately, neither chemists nor laboratories were present on the early Earth to produce RNA . . . .

The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck.

Orgel’s reply was in some respects almost as on point:

Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of [for instance] the reverse citric acid cycle was present anywhere on the primitive Earth [8], or that the cycle mysteriously organized itself topographically on a metal sulfide surface [6]? The lack of a supporting background in chemistry is even more evident in proposals that metabolic cycles can evolve to “life-like” complexity. The most serious challenge to proponents of metabolic cycle theories—the problems presented by the lack of specificity of most nonenzymatic catalysts—has, in general, not been appreciated. If it has, it has been ignored. Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own . . . .

The prebiotic syntheses that have been investigated experimentally almost always lead to the formation of complex mixtures. Proposed polymer replication schemes are unlikely to succeed except with reasonably pure input monomers. No solution of the origin-of-life problem will be possible until the gap between the two kinds of chemistry is closed. Simplification of product mixtures through the self-organization of organic reaction sequences, whether cyclic or not, would help enormously, as would the discovery of very simple replicating polymers. However, solutions offered by supporters of geneticist or metabolist scenarios that are dependent on “if pigs could fly” hypothetical chemistry are unlikely to help.

Now, onlookers, look back at the thread above.

Do you see any serious answer to the source of complex, functionally specific information coming from the objectors to design theory?

Or, do you see instead "if pigs could fly" speculation, whereby once mass and energy flow-throughs account for heat, we can disregard the disorganising effects that naturally utterly dominate?

That is the real issue and it is being ducked, dodged and diverted from.

KF

195.
cantor says:

Liddle@187
Cantor: I think it’s perfectly possible to tell whether a proposed explanation would violate the 2nd Law or not, and the evolutionary explanation doesn’t.

196.
cantor says:

I will re-post the question for your convenience:

Multiple Choice. Please select the 1 of the 4 statements below that most closely represents your views:

1) the unguided purposeless action of the four known physical laws cannot be the sole explanation for how this planet was transformed from barren and lifeless to what we see today, because that would violate the 2nd law

2) the unguided purposeless action of the four known physical laws is the sole explanation for how this planet was transformed from barren and lifeless to what we see today, and that does not violate the 2nd law

3) it is impossible to make a definitive quantitative argument either way

4) I do not know if it is possible to make a definitive quantitative argument either way

197.
keiths says:

cantor,

You’re oddly insistent. Lizzie lives in the UK and is probably asleep right now.

Instead of giving multiple-choice tests, why not just state your position, make an argument for it, and see how Lizzie responds? Why confine her to your predetermined set of options?

If I were answering your question, I would say that we don’t know for sure that the four fundamental forces are a complete list, but that I don’t see any good reason to invoke teleology as part of the explanation for life’s appearance on Earth.

198.
cantor says:

If I were answering your question, I would say that we don’t know for sure that the four fundamental forces are a complete list

That would be option 3. If you don’t know for sure that the four fundamental forces are a complete list, then you can’t make a definitive argument that starts by assuming that they are.

I don’t see any good reason to invoke teleology as part of the explanation for life’s appearance on Earth.

Teleology was not one of the options.

199.
cantor says:

You’re oddly insistent.

So are you. And Lizzie is oddly evasive.

200.
keiths says:

KF,

You’re on record claiming that the compensation argument is flawed.

Are you aware that to deny the compensation argument is to deny the second law itself?

How do you justify your position?

201.
keiths says:

cantor,

If you don’t know for sure that the four fundamental forces are a complete list, then you can’t make a definitive argument that starts by assuming that they are.

I don’t need to assume that they are.

First, nothing is certain in science (or in life — see my comments to DonaldM on this thread).

Second, when I argue that evolution doesn’t violate the second law, my argument doesn’t depend on knowing how many fundamental forces there are or whether they are “blind”. Why would it?

The second law was formulated long before we knew about the strong and weak nuclear forces, so knowledge of them isn’t necessary to make second law arguments, obviously.

To argue that knitting doesn’t violate the second law, I don’t need to invoke the four fundamental forces or determine that they are “blind”. Why should that be necessary in the case of evolution?

202.
cantor says:

nothing is certain in science

That would be option 3.

when I argue that evolution doesn’t violate the second law…

203.
cantor says:

I don’t need to assume that they are.

You do if you want to choose option 2.

But you chose option 3. So it’s all good.

204.
keiths says:

cantor,

That would be option 3.

[“it is impossible to make a definitive quantitative argument either way”]

Only if by ‘definitive’ you mean ‘absolutely certain’.

But that would be silly, because then there would be no “definitive quantitative arguments” in science at all.

How about trying to address my actual position, rather than the inane one you’d like to shoehorn me into?

205.
cantor says:

How about trying to address my actual position, rather than the inane one you’d like to shoehorn me into?

Down boy.

I am interested in the question I asked. If you are not, then don’t answer it. Move along. These aren’t the droids you’re looking for.

206.
cantor says:

Because your “actual position” is irrelevant to the question I asked. I am pursuing a line of reasoning and am not interested in your strawman distractions.

207.
cantor says:

CS3@182:
It is much more difficult to imagine spaceships and iPhones as products of those forces.

UNLESS, of course, all the information necessary for such an outcome was already encoded somehow in an appropriate form in the early barren planet. But that, of course, does not solve the problem; it just pushes the problem backward to an earlier time.

208.
keiths says:

cantor,

I can see that you have a script in mind that you’d like me (or Lizzie) to play out with you (as demonstrated, for example, by your eagerness to foist “option 3” on me). Reminds me of Upright Biped.

209.
cantor says:

Because you have your preferred style (posting long rambling incoherent self-contradicting irrelevant random thoughts) and I have mine: I ask questions. Please just skip over my posts if you don’t want to engage me on my terms.

210.
cantor says:

I can see that you have a script in mind

The only “script” I have in mind is to tenaciously refuse to be derailed by your insistence on changing the subject to “evolution”.

211.
keiths says:

cantor,

Please just skip over my posts if you don’t want to engage me on my terms.

OK.

The only “script” I have in mind is to tenaciously refuse to be derailed by your insistence on changing the subject to “evolution”.

Yes. God forbid (so to speak) that someone talk about entropy, evolution and open systems when discussing a paper entitled Entropy, Evolution and Open Systems.

Good thing we have you keeping us on the rails, cantor.

212.
kairosfocus says:

KS:

With all due respect, this is utter nonsense:

Are you aware that to deny the compensation argument is to deny the second law itself?

You have just proved beyond all doubt that your ideology is leading you into thermodynamic absurdities, by failing to distinguish between the implications of diffusion and related phenomena in light of relevant molecular circumstances etc., and what happens with purposeful constructive work. (Which is in fact as commonplace as what is needed to construct posts in this thread.)

FYI, a construction site or an assembly line — e.g. for a Jumbo Jet — is not a violation of the second law. Such cases show instead that intelligence with skill, purpose, plans, equipment and resources as necessary is able to carry out energy-conversion-based constructive work and, by virtue of such, to create organised entities that would, by overwhelming improbability similar to that of a 500-H string of coins happening by tossing, not SPONTANEOUSLY appear by diffusion or some stand-in for it. But of course, you and your ilk were in deep denial over the 500-H coin toss exercise also.
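As a quick illustration of the scale of improbability the 500-H coin example invokes, here is a minimal sketch; it uses only elementary probability, and nothing beyond the 500-coin framing comes from the thread itself:

```python
import math

# Probability that 500 independent fair coin tosses all land heads
# (the "500H" state): one microstate out of 2^500 equally likely ones.
N = 500
p_all_heads = 2.0 ** (-N)                     # ≈ 3.05e-151
n_configs = math.exp(N * math.log(2))         # 2^500 ≈ 3.27e150

print(f"P(all heads) = {p_all_heads:.3e}")
print(f"number of configurations = {n_configs:.3e}")
```

The point being quantified is only about relative frequencies of microstates, not about any particular physical mechanism.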

(If you still do not understand the relevance of diffusion as a contrast to construction [the point Sewell has been emphasising . . . ], kindly cf. the thought exercise here in my longstanding always-linked note. Notice how, in the thought exercise, in accord with the laws of physics and an imagined circumstance, the parts for a flyable micro-jet are diffused in a large vat; then, as a control, the issue is whether diffusion would spontaneously assemble a jet or something else. [By overwhelming improbability, not credibly; on the gamut of the solar system or observed cosmos.] Then, note the use of clumping nanobots, and the work of clumping, which undoes much of diffusion. Then note the further work of correctly arranging, which then yields the final construction. In these cases, observe that informationally directed work is able to construct a desired and planned entity. In so doing, obviously, much shaft work is done, which, as it is performed by a physical entity, uses energy conversion devices; hence there is no violation of the 2nd law, as in the course of that work there is the exhaustion of waste energy that becomes heat. This is quite similar to Szilard's answer to the Maxwell demon dilemma: the effort to acquire and apply information makes the difference. And indeed, the thought exercise is obviously based on an update both to Maxwell and to Hoyle, by miniaturising Hoyle. Nor does the formation of a hurricane or a similar entity by forces of order [cf. here] answer the problem; the difference between chaos caused by random diffusive forces, order caused by mechanical necessity, and organisation in accordance with a Wicken-type wiring diagram is obvious, cf. here.)

For that matter, this same difference between diffusive or dissipative, probability-driven processes and constructive work, holds for the production of posts in this thread, which also exhibit FSCO/I.

Your quarrel at this point is not with me or with Dr Sewell or with Nobel-equivalent prize holder the late Sir Fred Hoyle (who talked about Jumbo Jets being made by tornadoes to illustrate his point).

Nope, it is with J S Wicken, Robert Shapiro and Leslie Orgel, who at least acknowledged that something was missing in the account on OOL, as can be seen at 190 above and in the case of Wicken as I just linked.

You are in the position of saying that constructive, functionally specific work, appearing spontaneously, has constructed life from chemicals in a stew in a pond or the like, and has onwards constructed the rest of the world of life, without first pausing to show, by concrete observed example, that the production of constructive work by forces of diffusion or the like is an empirically observed fact leading to FSCO/I.

That is, you are improperly appealing to the implied magical powers of lucky noise to do something like create, at the border of Wales, next to a railroad, by a rock avalanche, a sign saying "Welcome to Wales."

Sewell’s fundamental point is right, obviously right:

. . . The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid [–> i.e. diffuses] is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur.

KF

213.
kairosfocus says:

F/N: This by Brillouin, is also instructive on the fundamental issues at stake:
__________

>> How is it possible to formulate a scientific theory of information? The first requirement is to start from a precise definition . . . .

We consider a problem involving a certain number of possible answers, if we have no special information on the actual situation. When we happen to be in possession of some information on the problem, the number of possible answers is reduced, and complete information may even leave us with only one possible answer. Information is a function of the ratio of the number of possible answers before and after, and we choose a logarithmic law in order to insure additivity of the information contained in independent situations [cf. basic outline here in context from Section A of my note.] . . . .

Physics enters the picture when we discover a remarkable likeness between information and entropy. This similarity was noticed long ago by L. Szilard, in an old paper of 1929, which was the forerunner of the present theory. In this paper, Szilard was really pioneering in the unknown territory which we are now exploring in all directions. He investigated the problem of Maxwell’s demon, and this is one of the important subjects discussed in this book. The connection between information and entropy was rediscovered by C. Shannon in a different class of problems, and we devote many chapters to this comparison. We prove that information must be considered as a negative term in the entropy of a system; in short, information is negentropy. The entropy of a physical system has often been described as a measure of randomness in the structure of the system. We can now state this result in a slightly different way:

Every physical system is incompletely defined. We only know the values of some macroscopic variables, and we are unable to specify the exact positions and velocities of all the molecules contained in a system. We have only scanty, partial information on the system, and most of the information on the detailed structure is missing. Entropy measures the lack of information; it gives us the total amount of missing information on the ultramicroscopic structure of the system.

This point of view is defined as the negentropy principle of information [added links: cf. explanation in Section A of my note here], and it leads directly to a generalization of the second principle of thermodynamics, since entropy and information must be discussed together and cannot be treated separately. This negentropy principle of information will be justified by a variety of examples ranging from theoretical physics to everyday life. The essential point is to show that any observation or experiment made on a physical system automatically results in an increase of the entropy of the laboratory. It is then possible to compare the loss of negentropy (increase of entropy) with the amount of information obtained. The efficiency of an experiment can be defined as the ratio of information obtained to the associated increase in entropy. This efficiency is always smaller than unity, according to the generalized Carnot principle. Examples show that the efficiency can be nearly unity in some special examples, but may also be extremely low in other cases.

This line of discussion is very useful in a comparison of fundamental experiments used in science, more particularly in physics. It leads to a new investigation of the efficiency of different methods of observation, as well as their accuracy and reliability . . . . [Science and Information Theory, Second Edition, 1962. Dover Reprint.] >>
_________
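Brillouin's log-ratio definition of information, quoted above, can be sketched in a few lines. The function name and the eight-answer example are illustrative assumptions, not from the excerpt:

```python
import math

def info_bits(n_before, n_after):
    """Brillouin's definition: information gained, in bits, when the
    number of equally likely possible answers drops from n_before to
    n_after. The logarithm makes information additive over
    independent situations, as the excerpt notes."""
    return math.log2(n_before / n_after)

# Narrowing 8 equally likely answers down to 1 yields 3 bits:
total = info_bits(8, 1)
# Additivity: narrowing 8 -> 2, then 2 -> 1, gives the same total:
stepwise = info_bits(8, 2) + info_bits(2, 1)
```

Complete information (one remaining answer) maximises the ratio; no information (no narrowing) gives log(1) = 0 bits.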

The onlooker is again invited to cf the FYI-FTR here.

KF

214.
keiths says:

KF,

You’re bluffing again.

You dedicated an entire OP to challenging the compensation argument.

I have shown that to deny the compensation argument is to deny the second law itself. The demonstration is simple and obviously correct. You (like Granville) have made a pitiful mistake and have shot yourself in the foot.

To salvage your position, you will have to show that my analysis is flawed.

Good luck.

215.
kairosfocus says:

F/N 2: Let me also clip the cited remark in my note on the info-entropy link that was nicely summarised by — surprise — Wiki:

At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann’s constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing.

But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics. [Also, another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, “Gain in entropy always means loss of information, and nothing more” . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
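The Landauer bound mentioned in the excerpt, a minimum of k_B ln 2 of thermodynamic entropy per bit, can be made concrete with a small sketch. It also illustrates the excerpt's point about the numerical smallness of Boltzmann's constant; the gigabyte example is an illustrative assumption:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_min_entropy(bits):
    """Minimum thermodynamic entropy increase, in J/K, associated by
    Landauer's principle with erasing `bits` of information:
    k_B * ln(2) per bit."""
    return bits * K_B * math.log(2)

one_bit = landauer_min_entropy(1)          # ≈ 9.57e-24 J/K
one_gigabyte = landauer_min_entropy(8e9)   # ≈ 7.7e-14 J/K
```

Even a gigabyte of erased information corresponds to a minuscule ΔS by laboratory standards, which is why informational entropies are "right off the scale" compared to chemical ones.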

Harry S Robertson, in Statistical Thermophysics, pp. vii – viii, has aptly brought out the work-related significance of that loss of information:

. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .

KF

216.
kairosfocus says:

KS: It is you who are bluffing and have been called.

I freely say that you cannot back up your bluff.

Show us your verified observation of a significant quantum of constructive work producing FSCO/I rich entities solely through spontaneous action of diffusion and other dissipative forces. Or even the degree of order reflected in a tray of coins in the 500H state, or the O2 molecules in a room all spontaneously rushing to one end.

There are cases of order that do emerge by mechanical necessity and relevant forces in a situation such as snowflakes, or hurricanes, but this has nothing to do with functional, specific, complex organisation.

Or, just tell us the case where it has been demonstrated empirically that by the ordinary physical and chemical forces in a warm little pond or the like, a living cell based on homochiral C chemistry in aqueous medium, with encapsulation and intelligent gating, metabolic sub systems and with a von Neumann code based self replication system, has originated by blind chance and mechanical necessity.

That is the missing root of your tree of life, and no root, no shoots no tree.

Or, do you have the equivalent of spontaneous formation of a Welcome to Wales sign at the border of Wales, through a rock avalanche?

Show us, no more drumbeat dismissive talking points.

I predict the result: you don’t have any such, but your ideology demands that the equivalent and more, happened in some warm little pond or the like, leading to a world of life that by blind chance variation and loss of unlucky or less favoured varieties, managed to form dozens of body plans requiring increments of FSCO/I of order 10 – 100 mn bases.

Remember, the unanswered UD pro-darwinist essay challenge stands at over nine months without a serious answer on empirically grounded evidence addressing the full tree of life from the root up.

KF

217.
keiths says:

KF,

No rebuttal to my simple little argument?

218.
kairosfocus says:

BTW: Your claimed counterexample comes by way of failing to address the Clausius example fully, as has been laid out in the longstanding note and the recent FYI-FTR. What you are ignoring is the force of the relevant statistics that drives the 2nd law: when A transfers d’Q to B, at lower temperature, the loss of possible ways to arrange mass and energy at micro levels for A is far less than the rise of same in B, which is how the 2nd law was defined. And as usual, you are ignoring the result that importation of raw energy INCREASES entropy, as I took time to discuss in detail in the FYI-FTR, excerpting a longstanding note. What happens in a different case is that if B has an energy converter to which imported energy is coupled, it may perform constructive shaft work, and will need to exhaust energy to C at a yet lower temperature. But by slicing up the story and setting up strawmen you can pretend that you have answered the problem. You have not. What needs explanation is organisation tracing to constructive work, and dissipative forces like diffusion are, by overwhelming probability, simply not capable of explaining such. But then, you and your ilk have failed to cogently address something so simple as a tray of coins in the 500-H state, so that is no surprise. You believe in the magical powers of lucky noise. Okay, if you want to do so, show us a case like the rock avalanche spontaneously producing “Welcome to Wales” at the Welsh border. KF
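The Clausius bookkeeping invoked here, A losing Q/T_hot while the cooler B gains Q/T_cold, can be sketched numerically; the heat and temperature figures below are illustrative assumptions, and temperatures are treated as constant over the transfer:

```python
def two_body_entropy_change(Q, T_hot, T_cold):
    """Entropy changes when heat Q (J) flows from body A at T_hot (K)
    to body B at T_cold (K). A loses Q/T_hot; B gains Q/T_cold; since
    T_cold < T_hot, B's gain exceeds A's loss and total entropy rises,
    which is the Clausius version of the second law."""
    dS_A = -Q / T_hot
    dS_B = Q / T_cold
    return dS_A, dS_B, dS_A + dS_B

dS_A, dS_B, dS_total = two_body_entropy_change(1000.0, 400.0, 300.0)
# dS_A = -2.5 J/K, dS_B ≈ +3.33 J/K, dS_total ≈ +0.83 J/K > 0
```

The sketch covers only the heat-transfer accounting that both sides of the thread accept; it says nothing about what the disputed "constructive work" would require.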

219.
kairosfocus says:

KS: Your argument was anticipated [actually from Clausius on], and the fact that you presented it as though it were a refutation only shows that you are pushing strawman-tactic talking points. What needs to be explained and justified empirically is the alleged production of constructive organising work of complex entities through diffusion or the like, not, say, the freezing of an ice cube. Onlookers, cf. FYI-FTR, with a focus on the case of the heat transfer from A to B and onwards. KF

220.
keiths says:

KF,

You claim my argument is wrong. If that’s true, then you should be able to identify the precise step or steps that are mistaken and explain why they are mistaken.

The steps are conveniently numbered in my argument, which is reproduced below for your convenience. Please refer to those numbers in your attempted rebuttal.

Or you can try to bluff your way out of your predicament.

Either way, the onlookers are watching. Good luck.

CS3,

I’ve mentioned this a couple of times already but people (including you) haven’t picked up on it, so let me try again.

When Granville argues against the compensation idea, he is unwittingly arguing against the second law itself.

It’s easy to see why. Imagine two open systems A and B that interact only with each other. Now draw a boundary around just A and B and label the contents as system C.

Because A and B interact only with each other, and not with anything outside of C, we know that C is an isolated system (by definition). The second law tells us that the entropy cannot decrease in any isolated system (including C). We also know from thermodynamics that the entropy of C is equal to the entropy of A plus the entropy of B.

All of us (including Granville) know that it’s possible for entropy to decrease locally, as when a puddle of water freezes. So imagine that system A is a container of water that becomes ice.

Note:

1. The entropy of A decreases when the water freezes.

2. The second law tells us that the entropy of C cannot decrease.

3. Thermodynamics tells us that the entropy of C is equal to the entropy of A plus the entropy of B.

4. Therefore, if the entropy of A decreases, the entropy of B must increase by at least an equal amount to avoid violating the second law.

The second law demands that compensation must happen. If you deny compensation, you deny the second law.
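A minimal numeric check of steps 1 through 4, assuming illustrative figures for the latent heat of fusion of water and the two temperatures (none of these numbers are from the comment):

```python
L_FUSION = 3.34e5   # J/kg, approximate latent heat of fusion of water
m = 1.0             # kg of water freezing inside system A
T_A = 273.15        # K, freezing point (temperature of A)
T_B = 263.15        # K, colder surroundings (system B)

Q = m * L_FUSION            # heat released by A into B as the water freezes
dS_A = -Q / T_A             # step 1: entropy of A decreases
dS_B = Q / T_B              # entropy of B increases
dS_C = dS_A + dS_B          # step 3: entropy of the isolated system C

# Step 4: because T_B < T_A, dS_B exceeds |dS_A|, so dS_C ≈ +46 J/K > 0
# and the second law for C (step 2) is satisfied.
```

B's entropy gain exceeds A's loss precisely because the same heat Q is divided by a lower temperature, which is the compensation being argued for.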

Thus Granville’s paper is not only chock full of errors, it actually shoots itself in the foot by contradicting the second law!

It’s a monumental mess that belongs nowhere near the pages of any respectable scientific publication. The BI organizers really screwed up when they accepted Granville’s paper.

221.
CS3 says:

1. The entropy of A decreases when the water freezes.

2. The second law tells us that the entropy of C cannot decrease.

3. Thermodynamics tells us that the entropy of C is equal to the entropy of A plus the entropy of B.

4. Therefore, if the entropy of A decreases, the entropy of B must increase by at least an equal amount to avoid violating the second law.

Compensation is valid when discussing only thermal entropy. The problem with compensation is when it is applied to two different unrelated entropies. I know your position is that thermal entropy is all that is relevant to the Second Law, so you might well never do such a thing. But see my comment 169, and you will see that Asimov, Styer, and Bunn clearly do just that.

In your example, has something entered or left system A that makes the formation of ice in system A not improbable? Yes, heat. So there is no problem.

Here is an example of the compensation argument involving two unrelated entropies. This is what Sewell has a problem with.

1. Spaceships and the Internet form in System A.

2. The thermal entropy of System B has increased.

3. Thermodynamics tells us that the entropy of C is equal to the entropy of A plus the entropy of B.

4. Somehow we convert the improbability of spaceships and the Internet to units of thermal entropy and make sure it is “compensated for” by the increase of thermal entropy in B.

222.
keiths says:

CS3,

Compensation is valid when discussing only thermal entropy.

Thank you for stating that so unambiguously. KF and Granville, are you listening?

The problem with compensation is when it is applied to two different unrelated entropies…

Here is an example of the compensation argument involving two unrelated entropies. This is what Sewell has a problem with.

I know this is hard to believe, CS3, but Granville really does believe that the entire concept of compensation is invalid. KF too.

From the paper:

Of course the whole idea of compensation, whether by distant or nearby events, makes no sense logically: an extremely improbable event is not rendered less improbable simply by the occurrence of “compensating” events elsewhere.

Not only is he wrong to claim that the “whole idea” of compensation is illogical; he also presents a strawman version of the compensation argument, as I pointed out earlier in the thread:

For anyone who still doesn’t get it, here is an explanation of Granville’s biggest error.

The compensation argument says that entropy can decrease in a system as long as there is a sufficiently large net export of entropy from the system.

Granville misinterprets the compensation argument as saying that anything, no matter how improbable, can happen in a system as long as the above criterion is met.

This is obviously wrong, so Granville concludes that the compensation argument is invalid. In reality, only his interpretation of the compensation argument is invalid. The compensation argument itself is perfectly valid.

The compensation argument shows that evolution doesn’t violate the second law. It does not say whether evolution happened; that is a different argument.

Granville confuses the two issues because of his misunderstanding of the compensation argument.

Since the second law isn’t violated, it has no further relevance. Granville is skeptical of evolution, but his skepticism has nothing to do with the second law.

He is just like every other IDer and creationist: an evolution skeptic.

You can see why this is a huge disappointment to him. Imagine if he had actually succeeded in showing that evolution violated a fundamental law of nature!

223.

Cantor:

So are you. And Lizzie is oddly evasive.

Cantor: if you present me with a non-exhaustive list of possible positions, and none represents mine, clearly I cannot sign on to one of the positions you have offered.

Let me ask you one question:

1. Does a tornado have less, more, or the same entropy, of any sort, as still air?

If your answer depends on the kind of entropy, please say which entropies are greater, the same, or less, in a tornado than in still air.

(I’d be interested in any answers to this question, which may shed light on where we are disagreeing here.)

224.

Can I ask if anyone on this thread still thinks that Granville was correct, when he said, in his Mathematical Intelligencer paper:

to attribute the development of life on Earth to natural selection is to assign to it–and to it alone, of all known natural “forces”–the ability to violate the second law of thermodynamics and to cause order to arise from disorder.

?

225.

Cantor – sorry, missed this formulation of your question:

Multiple Choice. Please select the 1 of the 4 statements below that most closely represents your views:

1) the unguided purposeless action of the four known physical laws cannot be the sole explanation for how this planet was transformed from barren and lifeless to what we see today, because that would violate the 2nd law

2) the unguided purposeless action of the four known physical laws is the sole explanation for how this planet was transformed from barren and lifeless to what we see today, and that does not violate the 2nd law

3) it is impossible to make a definitive quantitative argument either way

4) I do not know if it is possible to make a definitive quantitative argument either way

3 comes closest, but that is simply because my position is the standard scientific one: no conclusion in science is ever definitive – all conclusions are provisional, and all models are incomplete.

However, what can be definitive is what a hypothesis consists of, and the evolutionary hypothesis does not require that “natural selection” has “the ability to violate the second law of thermodynamics”. Many processes can “cause order to arise from disorder” and no 2nd law violations are involved. Therefore Granville’s claim, quoted by me in 220, is incorrect.

2 would be my position were it to be reworded to reflect the conditional nature of any hypothesis, for example:

“If the unguided purposeless action of the four known physical laws were to be the sole explanation for how this planet was transformed from barren and lifeless to what we see today, this would not imply a violation of the 2nd law.”

Granville’s claim is that it would, and my position is that that claim is incorrect.

Moreover, it is incorrect for exactly the reasons his objectors have given – that as long as local entropy decreases are “compensated for” by entropy increases elsewhere, the 2nd law has not been violated. And “compensated” doesn’t just mean that as long as there is increased entropy on Alpha Centauri, we can have spaceships here; it means that a decrease in entropy in a system must be achieved by work done on that system by another system, which necessarily will experience an entropy increase, because that’s what happens (see the 1st Law also) when work is done.

The reason the sun comes into it is that the sun is the major source of energy on earth, and it is mostly because of the sun that we have energy gradients on earth – the earth is not in equilibrium. But there are other reasons – the earth itself is still cooling, and so we have geothermal energy gradients as well.

All these gradients are lowish-entropy systems that can do work on other systems, resulting in local entropy decreases in those other systems, at the cost of increased entropy in the system with the gradient (i.e. reduction of that gradient).

So, to summarise: my position is that Granville’s original claim was incorrect, that the rebuttals are essentially correct (“compensation” is the reason that local entropy decreases are possible), and that Granville’s counter-rebuttal is confused and at best reduces his argument to a restatement of Dembski’s CSI argument, which has nothing to do with thermodynamics at all.

The 2nd Law of thermodynamics argument against evolutionary theory should, I suggest, be quietly laid to rest.

226.

Lastly (for now!), above I wrote:

Here are two arrangements of Hs and Ts:

TTTHTTHHTT

TTTHTTHHTT

Which is more “probable”? “But they are both the same!” you say! But I will now tell you that I generated the first by using the formula, =IF(RAND()> 0.5,”H”,”T”) in Excel and pasting the results into the post, and I generated the second by carefully copying the first into a new line. So the probability of getting the first pattern is 1/(2^10), whereas the probability of getting the second is near 1 (if I’d used cut and paste, it would have been 1, but I relied on hand-typing it).

In other words, the probability of an arrangement is not discernible by looking at the arrangement; it must be computed as the probability of that arrangement given a generative process.

Does anyone disagree that in order to tell which of my two arrangements of Hs and Ts was more “probable”, we need to know the process that generated them?

And if not, does anyone disagree that “probability” is not the property of an arrangement alone, but of the arrangement, given the process that is postulated to have generated it?
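The point in 226 can be sketched numerically. A minimal Python sketch, assuming only the string and the fair-coin process described in that comment (everything else is illustrative):

```python
target = "TTTHTTHHTT"

# Process 1: ten independent fair coin flips (the Excel RAND() analogue).
# Every one of the 10 flips must match, so the probability is (1/2)**10.
p_flips = 0.5 ** len(target)

# Process 2: deterministically copying an existing line.
# The copy reproduces the string with certainty.
p_copy = 1.0

print(p_flips)  # 0.0009765625, i.e. 1/1024
print(p_copy)   # 1.0
```

The same string thus carries two very different probabilities, one per generative process, which is exactly the claim being argued.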

227.

Can I ask if anyone on this thread still thinks that Granville was correct, when he said, in his Mathematical Intelligencer paper:

I think it’s poorly worded. I would say that Darwinists assign to life a capacity for matter to do things under the 2LoT that they would never accept anywhere else, open system or not, such as a tornado running through a junkyard and constructing a functioning 747. They only make that argument in this case because their ideology depends on it.

“It’s not impossible under the 2LoT” is not a significant argument.

228.

In that case what does “under the 2LoT” add to the argument?

And what have tornadoes in junkyards got to do with anything? Nobody is suggesting that tornadoes can produce Boeing 747s from junkyards.

Yet sun-warmed sea can and does produce tornadoes from still air, and exquisite hexagonal crystals from water vapour.

How is this possible?

229.

In that case what does “under the 2LoT” add to the argument?

Darwinists tacitly accepted the argument that if the Earth were a closed system, life would be so improbable as to be considered impossible – but, the caveat has always been that Earth is an open system. Sewell is pointing out that unless the kind of order specific to what is being explained is being imported into the system from outside of it, the presence of life is as unlikely in a closed system as in an open one.

Yet sun-warmed sea can and does produce tornadoes from still air, and exquisite hexagonal crystals from water vapour.

How is this possible?

Already explained. How likely certain configurations of matter are under 2LoT (in the distributive sense) is determined by physical law; snowflakes, under physical law, are not unlikely configurations. Neither are spheroid celestial objects. Nor are rainbows. “What about a snowflake?” is not a viable rebuttal to the argument at hand. There is no known effect of natural laws acting on matter that would necessarily or likely produce a highly complex, functioning self-replicating machine.

Most agree – even those outside of the ID community – that such an event is highly unlikely, to the point that some have theorized “infinite universes” to expand the distribution of chance matter configurations to accommodate the spontaneous generation of life from inanimate matter.

I suggest that even Sewell isn’t saying that life actually violates the 2LoT, but rather that it is not sufficiently explained under the 2LoT unless a specific kind of order, not currently theorized, is being imported from outside the system to make life more likely. My view is that it is information pertinent to the ordering of matter into life that is specifically being imported from outside of the system – even to this day.

230.

Darwinists tacitly accepted the argument that if the Earth were a closed system, life would be so improbable as to be considered impossible – but, the caveat has always been that Earth is an open system. Sewell is pointing out that unless the kind of order specific to what is being explained is being imported into the system from outside of it, the presence of life is as unlikely in closed system as in an open one.

No. Darwinists have not “tacitly accepted the argument that if the Earth were a closed system, life would be so improbable as to be considered impossible”.

Let’s consider a closed earth – one so far from the sun that we can ignore energy input from the sun, but which is still fairly warm because its core is still molten. Let us further suppose that it is covered in water, and that there are hot spots on the sea bed where the molten core is closer to the crust.

Do you agree that there will be convection currents in the sea?

And, if you do, do you agree that local decreases in entropy are perfectly possible, even though there is no input of energy from anything external to the earth?

Already explained. How likely certain configurations of matter are under 2LoT (in the distributive sense) is determined by physical law; snowflakes, under physical law, are not unlikely configurations. Neither are spheroid celestial objects. Nor are rainbows. “What about a snowflake?” is not a viable rebuttal to the argument at hand. There is no known effect of natural laws acting on matter that would necessarily or likely produce a highly complex, functioning self-replicating machine.

In other words, a snowflake isn’t improbable because you know what causes a snowflake. But a living thing is improbable because you don’t know what causes living things.

And if that isn’t what you are saying, what are you saying?

Both are arrangements of matter with lower entropy than the same elements uniformly distributed. Why should one be any less “probable” than the other, “under the 2LoT”?

Most agree – even those outside of the ID community – that such an event is highly unlikely, to the point that some have theorized “infinite universes” to expand the distribution of chance matter configurations to accommodate the spontaneous generation of life from inanimate matter.

This is a bit of a myth. No, most people don’t agree that life is “highly unlikely” in this universe. Many think that given that we are here, there are probably many other planets on which life has also emerged. Hence SETI, much beloved of IDers 🙂 There are many reasons why other universes in addition to our observable one have been postulated, not least being the fact that we cannot see beyond the distance light can have travelled given the Big Bang, the speed of light, and the rate of expansion of space. We are at the dead centre of the observable universe. I don’t think anyone thinks that is for any reason other than that we can necessarily only observe things a certain distance away, in all directions.

I suggest that even Sewell isn’t saying that life actually violates the 2LoT, but rather that it is not sufficiently explained under 2LoT unless a specific kind of order is being imported from outside the system that makes life more likely that is not currently theorized.

I think that second thing is exactly what he is saying. Where he’s wrong is to think that is a problem. Order doesn’t have to be “imported” from outside earth. It simply needs to be “imported” from outside a cell! In the sense that work needs to be done on a diffused system in order to undiffuse it, and that work needs to be done by a system outside the diffused system, as a result of which, the outside system will increase in entropy.

It’s perfectly possible that the first proto-life-forms got their entropy reduction from convection currents, and it’s even possible that those convection currents got their own entropy reduction (from the state of still water) not from the sun, but from hotspots within the earth itself.

But nobody is proposing that they didn’t get it from some system external to the life form itself, or (apart from IDers, ironically) that the external system they got it from didn’t increase in entropy as a result of doing work on it. If the parts that make up a life form got their decreased entropy from, inter alia, a convection current, nobody is suggesting that the convection current didn’t diffuse a bit as a result.

IDers, however, do (or some do) seem to be implying exactly that, i.e. that designers can organise matter into cool non-uniform arrangements without themselves increasing in entropy.

My view is that it is information pertinent to the ordering of matter into life that is specifically being imported from outside of the system – even to this day.

What definition of “information” are you using in this sentence?

231.
kairosfocus says:

Onlookers:

For the sake of completeness, I will do what I now rarely do, by making a point-by-point reply to KS at 186.

I note that the above exchanges are a farrago of red herrings led away to strawmen which are then set alight to distract from the core issues.

___________

>> When Granville argues against the compensation idea, he is unwittingly arguing against the second law itself.>>

1 –> The core of GS’s argument is that forces that on balance of probabilities lead to diffusion and the like, are maximally implausible as the source of constructive work. Citing his paper, A Second Look at the Second Law, again (as done at 190 above):

Note that (2) [a flow gradient expression] simply says that heat flows from hot to cold regions—because the laws of probability favor a more uniform distribution of heat energy . . . . From [an eqn that entails that in such a system, d’S >/= 0] (5) it follows that in an isolated, closed, system, where there is no heat flux through the boundary d’S >/= 0. Hence, in a closed system, entropy can never decrease. Since thermal entropy measures randomness (disorder) in the distribution of heat, its opposite (negative) can be referred to as ”thermal order”, and we can say that the thermal order can never increase in a closed system.

Furthermore, there is really nothing special about ”thermal” entropy. We can define another entropy, and another order, in exactly the same way, to measure randomness in the distribution of any other substance that diffuses, for example, we can let U(x,y,z,t) represent the concentration of carbon diffusing in a solid (Q is just U now), and through an identical analysis show that the ”carbon order” thus defined cannot increase in a closed system. It is a well-known prediction of the second law that, in a closed system, every type of order is unstable and must eventually decrease, as everything tends toward more probable states . . .

2 –> At no point have objectors provided an example of FSCO/I arising spontaneously by such dispersive forces, through their providing constructive work. This is also the implicit point in Hoyle’s example of a tornado passing through a junkyard and lo and behold, a jumbo jet emerges, NOT. By contrast, work involving a probably comparable amount of energy or even less, by men, machines and equipment working to a constructive plan, will build a jumbo jet. That is, we must recognise the difference between forces that blindly and freely move things around in accord with statistical patterns and those that move them according to a plan.

3 –> This issue lies as well at the heart of the recent challenge to explain how a box of 500 coins, all H, came to be. KS, EL, and others of their ilk have been adamant in refusing the best explanation [constructive work] and in refusing as well to recognise that, due to the differing statistical weights of clusters of microstates, such a 500H state arising by random tossing is practically unobservable on the gamut of the solar system.
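The arithmetic behind the claim that 500 heads by random tossing is practically unobservable can be checked directly. In the sketch below, the 10**150 trial budget is an illustrative stand-in for any generous estimate of available probabilistic resources, not a figure taken from the thread:

```python
from math import log10

p_all_heads = 0.5 ** 500          # one toss of 500 fair coins landing all H
print(log10(p_all_heads))         # about -150.5, i.e. p is roughly 10**-151

trials = 10 ** 150                # an extravagantly generous number of tosses
expected = trials * p_all_heads   # expected count of all-H outcomes
print(expected)                   # about 0.3: unlikely to be seen even once
```

Even with that many tosses, the expected number of all-heads outcomes is below one, which is the sense in which the configuration is unobservable by blind tossing.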

4 –> Notice, also, GS has put the issue of forces of diffusion at the pivot of his case, and indeed that at once allows us to see that when he speaks of X-entropy, he is speaking of the sort of thing that makes C diffuse even in the solid state.

>>It’s easy to see why. Imagine two open systems A and B that interact only with each other. Now draw a boundary around just A and B and label the contents as system C.>>

5 –> Here KS revisits Clausius’ first example, which appears in my always-linked note and which is clipped in the FYI-FTR. He is about to refuse to look seriously at what is happening at micro level when d’Q of heat moves from A at a higher temp to B at a lower. In short, he leads away via a red herring and erects and burns a strawman. Let me lay out the summary that was there for literally years in App 1 of my note:

1] TMLO: In 1984, this well-received work provided the breakthrough critical review on the origin of life that led to the modern design school of thought in science. The three online chapters, as just linked, should be carefully read to understand why design thinkers think that the origin of FSCI in biology is a significant and unmet challenge to neo-darwinian thought. (Cf also Klyce’s relatively serious and balanced assessment, from a panspermia advocate. Sewell’s remarks here are also worth reading. So is Sarfati’s discussion of Dawkins’ Mt Improbable.)

2] But open systems can increase their order: This is the “standard” dismissal argument on thermodynamics, but it is both fallacious and often resorted to by those who should know better. My own note on why this argument should be abandoned is:

a] Clausius is the founder of the 2nd law, and the first standard example of an isolated system — one that allows neither energy nor matter to flow in or out — is instructive, given the “closed” subsystems [i.e. allowing energy to pass in or out] in it:

Isol System:

| |(A, at Thot) –> d’Q, heat –> (B, at T cold)| |

b] Now, we introduce entropy change dS >/= d’Q/T . . . “Eqn” A.1

c] So, dSa >/= -d’Q/Th, and dSb >/= +d’Q/Tc, where Th > Tc

d] That is, for the system as a whole, dStot = dSa + dSb >/= d’Q (1/Tc – 1/Th) >/= 0, as Th > Tc . . . “Eqn” A.2

e] But, observe: the subsystems A and B are open to energy inflows and outflows, and the entropy of B RISES DUE TO THE IMPORTATION OF RAW ENERGY.
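The a] to e] bookkeeping can be made numeric. A small sketch, using illustrative temperatures and the reversible-limit equality in place of the inequalities above:

```python
Th, Tc = 500.0, 300.0   # hot and cold temperatures in kelvin (illustrative)
dQ = 1000.0             # joules of heat passing from A to B

dS_A = -dQ / Th         # A loses heat:  -2.0 J/K
dS_B = +dQ / Tc         # B gains heat:  about +3.33 J/K
dS_total = dS_A + dS_B  # about +1.33 J/K, positive because Th > Tc

print(dS_A, dS_B, dS_total)
```

B’s entropy rises on importing raw heat, and by more than A’s falls, which is the point e] makes in words.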

f] The key point is that when raw energy enters a body, it tends to make its entropy rise. This can be envisioned on a simple model of a gas-filled box with piston-ends at the left and the right:

=================================
||::::::::::::::::::::::::::::::::::::::::::||
||::::::::::::::::::::::::::::::::::::::::::||===
||::::::::::::::::::::::::::::::::::::::::::||
=================================

1: Consider a box as above, filled with tiny perfectly hard marbles [so collisions will be elastic], scattered like raisins in a Christmas pudding (pardon how the textual elements give the impression of a regular grid; think of them as scattered more or less haphazardly, as would happen in a cake).

2: Now, let the marbles all be at rest to begin with.

3: Then, imagine that a layer of them up against the leftmost wall were given a sudden, quite, quite hard push to the right [the left and right ends are pistons].

4: Simply on Newtonian physics, the moving balls would begin to collide with the marbles to their right, and in this model perfectly elastically. So, as they hit, the other marbles would be set in motion in succession. A wave of motion would begin, rippling from left to right.

5: As the glancing angles on collision will vary at random, the marbles hit and the original marbles would soon begin to bounce in all sorts of directions. Then, they would also deflect off the walls, bouncing back into the body of the box and other marbles, causing the motion to continue indefinitely.

6: Soon, the marbles will be continually moving in all sorts of directions, with varying speeds, forming what is called the Maxwell-Boltzmann distribution, a bell-shaped curve.

7: And, this pattern would emerge independent of the specific initial arrangement or how we impart motion to it, i.e. this is an attractor in the phase space: once the marbles are set in motion somehow, and move around and interact, they will soon enough settle into the M-B pattern. E.g. the same would happen if a small charge of explosive were set off in the middle of the box, pushing out the balls there into the rest, and so on. And once the M-B pattern sets in, it will strongly tend to continue . . . .

For the injection of energy to instead predictably and consistently do something useful, it needs to be coupled to an energy conversion device.
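The relaxation described in points 1 to 7 can be illustrated with a toy model (my own simplification, not the marble simulation itself): particles repeatedly “collide” in pairs and randomly re-split their combined energy. Whatever the starting arrangement, the energies settle toward the exponential Boltzmann form that underlies the Maxwell-Boltzmann speed distribution:

```python
import random

random.seed(1)
N = 10_000
E = [1.0] * N                    # start far from equilibrium: all energies equal

for _ in range(200_000):         # random pairwise energy-conserving "collisions"
    i, j = random.sample(range(N), 2)
    total = E[i] + E[j]
    split = random.random()
    E[i], E[j] = split * total, (1 - split) * total

# At equilibrium the energies are exponentially distributed with mean 1.0;
# the fraction of particles below the mean approaches 1 - exp(-1), about 0.632.
frac_below_mean = sum(e < 1.0 for e in E) / N
print(frac_below_mean)           # close to 0.632
```

This is the “attractor” behaviour of point 7: the equilibrium distribution is reached regardless of how the energy was initially arranged.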

g] When such energy conversion devices, as in the cell, exhibit FSCI, the question of their origin becomes material, and in that context, their spontaneous origin is strictly logically possible but — from the above — negligibly different from zero probability on the gamut of the observed cosmos. (And, kindly note: the cell is an energy importer with an internal energy converter. That is, the appropriate entity in the model is B and onward B’ below. Presumably as well, the prebiotic soup would have been energy importing, and so materialistic chemical evolutionary scenarios therefore have the challenge to credibly account for the origin of the FSCI-rich energy converting mechanisms in the cell relative to Monod’s “chance + necessity” [cf also Plato’s remarks] only.)

h] Now, as just mentioned, certain bodies have in them energy conversion devices: they COUPLE input energy to subsystems that harvest some of the energy to do work, exhausting sufficient waste energy to a heat sink that the overall entropy of the system is increased. Illustratively, for heat engines — and (in light of exchanges with email correspondents circa March 2008) let us note: a good slice of classical thermodynamics arose in the context of studying, idealising and generalising from steam engines [which exhibit organised, functional complexity, i.e FSCI; they are of course artifacts of intelligent design and also exhibit step-by-step problem-solving processes (even including “do-always” looping!)]:

| | (A, heat source: Th): d’Qi –> (B’, heat engine, Te): –>

d’W [work done on say D] + d’Qo –> (C, sink at Tc) | |

i] A’s entropy: dSa >/= – d’Qi/Th

j] C’s entropy: dSc >/= + d’Qo/Tc

k] The rise in entropy in B, C and in the object on which the work is done, D, say, compensates for that lost from A. The second law — unsurprisingly, given the studies on steam engines that lie at its roots — holds for heat engines. [–> Notice, I have addressed the compensation issue all along.]
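Steps h] to k] can likewise be checked with illustrative numbers; the 30% efficiency below is an assumption of mine, and any value below the Carnot limit 1 - Tc/Th behaves the same way:

```python
Th, Tc = 500.0, 300.0     # source and sink temperatures in kelvin (illustrative)
dQi = 1000.0              # heat drawn from source A by the engine B'
eta = 0.30                # assumed efficiency, below the Carnot limit of 0.4

dW = eta * dQi            # work delivered to D: 300 J
dQo = dQi - dW            # waste heat exhausted to sink C: 700 J (1st law)

dS_A = -dQi / Th          # source entropy change: -2.0 J/K
dS_C = +dQo / Tc          # sink entropy change: about +2.33 J/K
print(dS_A + dS_C)        # about +0.33 J/K: a net rise, as k] states
```

The sink’s gain outweighs the source’s loss, so the second law holds for the engine cycle, which is the compensation point k] makes.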

l] However B, since it now couples energy into work and exhausts waste heat, does not necessarily undergo a rise in entropy on importing d’Qi. [The problem is to explain the origin of the heat engine — or more generally, energy converter — that does this, if it exhibits FSCI.] [–> Notice the pivotal question being ducked in the context of the origin of cell based life, through red herrings and strawmen.]

m] There is also a material difference between the sort of heat engine [an instance of the energy conversion device mentioned] that forms spontaneously as in a hurricane [directly driven by boundary conditions in a convective system on the planetary scale, i.e. an example of order], and the sort of complex, organised, algorithm-implementing energy conversion device found in living cells [the DNA-RNA-Ribosome-Enzyme system, which exhibits massive FSCI].

n] In short, the decisive problem is the [im]plausibility of the ORIGIN of such a FSCI-based energy converter through causal mechanisms traceable only to chance conditions and undirected [non-purposive] natural forces. This problem yields a conundrum for chem evo scenarios, such that inference to agency as the probable cause of such FSCI — on the direct import of the many cases where we do directly know the causal story of FSCI — becomes the better explanation. As TBO say, in bridging from a survey of the basic thermodynamics of living systems in CH 7, to that more focussed discussion in ch’s 8 – 9:

While the maintenance of living systems is easily rationalized in terms of thermodynamics, the origin of such living systems is quite another matter. Though the earth is open to energy flow from the sun, the means of converting this energy into the necessary work to build up living systems from simple precursors remains at present unspecified (see equation 7-17). The “evolution” from biomonomers to fully functioning cells is the issue. Can one make the incredible jump in energy and organization from raw material and raw energy, apart from some means of directing the energy flow through the system? In Chapters 8 and 9 we will consider this question, limiting our discussion to two small but crucial steps in the proposed evolutionary scheme, namely the formation of protein and DNA from their precursors.

It is widely agreed that both protein and DNA are essential for living systems and indispensable components of every living cell today.11 Yet they are only produced by living cells. Both types of molecules are much more energy and information rich than the biomonomers from which they form. Can one reasonably predict their occurrence given the necessary biomonomers and an energy source? Has this been verified experimentally? These questions will be considered . . . [Bold emphasis added. Cf summary in the peer-reviewed journal of the American Scientific Affiliation, “Thermodynamics and the Origin of Life,” in Perspectives on Science and Christian Faith 40 (June 1988): 72-83, pardon the poor quality of the scan. NB:as the journal’s online issues will show, this is not necessarily a “friendly audience.”]

[[–> in short this question was actually addressed in the very first design theory work, TMLO, in 1984, so all along the arguments we are here addressing yet again are red herrings led away to strawmen soaked in ad hominems as we will see again below, and set alight to cloud, confuse, poison and polarise the atmosphere.]

>>Because A and B interact only with each other, and not with anything outside of C, we know that C is an isolated system (by definition). The second law tells us that the entropy cannot decrease in any isolated system (including C). We also know from thermodynamics that the entropy of C is equal to the entropy of A plus the entropy of B.>>

6 –> KS is setting up his red herring and strawman version.

>>All of us (including Granville) know that it’s possible for entropy to decrease locally, as when a puddle of water freezes. So imagine that system A is a container of water that becomes ice.>>

7 –> Having dodged the pivotal issue of dispersive forces like diffusion being asked to carry out constructive work resulting in organisation of something rich in FSCO/I, KS gives an irrelevant example of order emerging by mechanical necessity acting in the context of heat outflow, where the polar molecules of water will form ice crystals on being cooled enough. This very example is specifically addressed in TMLO, and I have already spoken to this and similar cases.

8 –> By contrast, hear honest and serious remarks by Wicken and Orgel (which since 2010 have sat in the beginning of section D, IOSE intro-summary page, so KS either knows of or should know of this):

WICKEN, 1979: >> ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)] >>

ORGEL, 1973: >> . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.]>>

9 –> KS, of course, has presented to us a case of crystallisation, as though it is an answer to the matter at stake. At this point, given his obvious situation as a highly informed person, this is willful perpetuation of a misrepresentation, which has a short sharp, blunt three-letter name that begins with L.

>>Note:

1. The entropy of A decreases when the water freezes.

2. The second law tells us that the entropy of C cannot decrease.

3. Thermodynamics tells us that the entropy of C is equal to the entropy of A plus the entropy of B.

4. Therefore, if the entropy of A decreases, the entropy of B must increase by at least an equal amount to avoid violating the second law.>>

10 –> In these notes, KS ducks his intellectual responsibility to address just what happens with B so that the overall entropy is increased. Namely, that precisely because of the rise in accessible energy, the number of ways for energy and mass to be arranged at micro level so far increases as to exceed the loss in number of ways in A.

11 –> And, the exact same diffusive and dissipative forces already described strongly push B towards the clusters of states with the highest statistical weights, and away from those clusters with very low statistical weights. So, by importing energy B’s entropy increases and by enough that the net result is at minimum to have entropy of the system constant.

12 –> It is the statistical reasoning linked to this, and the onward link to the information involved, thence the information involved in functionally specific complex organisation, thence the need for constructive work rather than expecting diffusion and the like to do this spontaneously for “free”, that are pivotal to the case that KS has here distracted from and misrepresented. (Cf my microjets in a vat thought exercise case study here, which has been around since when, was it 2008 or so? And even if KS was ignorant of that, he had the real import of Hoyle’s argument, a contrast between what chaotic forces do and what planned constructive work does, as well as access to the points made by Orgel and Wicken. Likewise we can compare what Shapiro and Orgel said in their exchange on OOL. Golf balls do not in our experience play themselves around golf courses by lucky clusters of natural forces. If-pigs-could-fly scenarios are nonsense. And the example of a rock avalanche spontaneously forming “Welcome to Wales” at the border of Wales has been around for a long time too. All of these highlight the difference in capability between blind chance and mechanical necessity on the one hand, and intelligently directed constructive work on the other.)

>>The second law demands that compensation must happen. If you deny compensation, you deny the second law.>>

13 –> A sad excuse to play at red herrings and strawmen.

>>Thus Granville’s paper is not only chock full of errors, it actually shoots itself in the foot by contradicting the second law!>>

14 –> Here comes the smoke of burning, ad hominem-soaked strawmen now.

>>It’s a monumental mess that belongs nowhere near the pages of any respectable scientific publication. The BI organizers really screwed up when they accepted Granville’s paper. >>

15 –> throwing on more ad hominems to the fire to make even more polarisation, clouding of issues and poisoning of the atmosphere.
____________

KS’s grade is F-- (F double minus), for willful failure to do duties of care to accuracy, the substantial issues at stake, and fairness.

The grade of those who tried to pile on, building on his presumed expertise and the assumption that design thinkers are ignorant, stupid, insane or wicked, is similarly F---, as they should know better.

But onlookers, I hardly expect such to listen or accept correction, on a long and sad track record. Years from now they will still be presenting these fallacies as though they were correct answers to the pivotal problem of accounting for the constructive work to erect FSCO/I-rich systems.

How do I know this?

Easy, this is what has been going on since at least the 1990’s on this topic. And similar loaded strawman misrepresentation games are the heart of the Darwinist objections to design theory as the UD weak argument correctives highlight.

KF

PS: I should again note as well just for completeness, the summary on the nature of entropy as linked to information, that is of all places nicely put in Wiki’s article on informational entropy:

At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann’s constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing.

But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, “Gain in entropy always means loss of information, and nothing more” . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
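The quoted remark about the numerical smallness of Boltzmann’s constant can be checked with a quick back-of-envelope computation (a sketch using approximate textbook constants, not figures from the article itself): the entropy change of melting a single gram of ice, expressed in bits, dwarfs anything seen in data compression.

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
LATENT_FUSION = 334.0  # approximate latent heat of fusion of ice, J/g
T_MELT = 273.15        # melting point of ice, K

# Thermodynamic entropy change for melting 1 g of ice: dS = Q / T
dS = LATENT_FUSION / T_MELT      # about 1.2 J/K

# The same quantity expressed in bits: S / (k_B ln 2)
bits = dS / (K_B * math.log(2))  # about 1.3e23 bits

# Compare with the information content of a 1 TB drive (8e12 bits)
ratio = bits / 8e12
print(f"Melting 1 g of ice: {bits:.2e} bits ({ratio:.1e} x a 1 TB drive)")
```

The ten-orders-of-magnitude-plus gap is the sense in which, as the quote puts it, thermodynamic entropy changes are "right off the scale" compared to signal-processing entropies.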

232. 232

No. Darwinists have not “tacitly accepted the argument that if the Earth were a closed system, life would be so improbable as to be considered impossible”.

Yes, they did. When IDists called them on it, they started changing their tune. Then, Orwellian-style, you and others start re-writing history.

This is a bit of a myth.

No, it’s not. I’ve read many papers and books myself over the course of my life that attempt to explain how life could be generated even though it is so improbable.

In other words, a snowflake isn’t improbable because you know what causes a snowflake. But a living thing is improbable because you don’t know what causes living things.

No. A snowflake is not improbable because extrapolations of the basic forces and materials involved indicate it is not improbable. Life is improbable because extrapolations of the fundamental forces and materials offer no expectation that they should (or even could) generate a complex, self-reproducing machine.

If naturalism is true, we know what causes both snowflakes and life – physics. Physics predicts snowflakes; physics does not predict the spontaneous generation of complex, self-replicating machines operating via code and translation procedures.

Pardon my bluntness, but it is idiotic to make an equivalence between a snowflake and a living, self-replicating organism. Your service to your ideology is apparently making you say absurd things (DDS).

233. 233
cantor says:

God forbid (so to speak) that someone talk about entropy, evolution and open systems when discussing a paper entitled Entropy, Evolution and Open Systems.

Apparently when you say “OK” you don’t mean “OK”. Why do I find that not surprising?

God forbid that someone talk about entropy and open systems on a barren lifeless planet.

234. 234

Yes, they did. When IDists called them on it, they started changing their tune. Then, Orwellian-style, you and others start re-writing history.

I’m not rewriting anything. I’m simply trying to explain why I think Granville’s argument is wrong.

No, it’s not. I’ve read many papers and books myself over the course of my life that attempt to explain how life could be generated even though it is so improbable.

Well, it is not entirely true that proposals that there is more to the universe than the part we can see have had nothing to do with how improbable life would be if there were only our bit. Some have been inspired by the weirdness that we do seem to have a universe arbitrarily well suited to render life probable; but that is rather different from the argument that life is improbable even given the parameters of this universe.

In any case, it is not the contention of the vast proportion of “evolutionists” that life is necessarily particularly improbable in this universe.

In other words, a snowflake isn’t improbable because you know what causes a snowflake. But a living thing is improbable because you don’t know what causes living things.

No. A snowflake is not improbable because extrapolations of the basic forces and materials involved indicate it is not improbable. Life is improbable because extrapolations of the fundamental forces and materials offer no expectation that they should (or even could) generate a complex, self-reproducing machine.

I think you need to show your work. Also, demonstrate what you mean by “improbable”. We know that snowflakes are probable because we have a frequency distribution. We don’t have one for life.

If naturalism is true, we know what causes both snowflakes and life – physics. Physics predicts snowflakes; physics does not predict the spontaneous generation of complex, self-replicating machines operating via code and translation procedures.

But this is mere assertion. It is not an argument from physics.

Pardon my bluntness, but it is idiotic to make an equivalence between a snowflake and a living, self-replicating organism. Your service to your ideology is apparently making you say absurd things (DDS).

I didn’t make an equivalence, in any sense other than that the low entropy of a snowflake, relative to the same molecules diffused, or of a tornado for that matter, is perfectly explicable under the 2nd Lawn, and so invoking the 2nd Law to support the claim that life is improbable is no more valid than invoking it to support the (false) claim that tornadoes are improbable.

You can’t predict a tornado on the basis of fundamental physics, otherwise we’d be able to keep out of their way. Whether a specific tornado forms is unpredictable, yet they do. Whether life forms on a specific planet might be similarly unpredictable, yet we know it formed on at least one.

Nothing in the laws of physics, including the 2nd law of thermodynamics, tells us that life is improbable; only that if it is improbable, it is improbable.

235. 235
CS3 says:

keiths

When I allowed that “compensation” is valid in the case of thermal entropy, I meant that you can compare thermal entropy values to thermal entropy values, and there is an inequality that must be satisfied, as you did in your example. However, I would not really call this the “compensation argument,” because the increase in B is not merely “compensating” for the decrease in A in the sense of being an unrelated event that merely “offsets” it – it is, in this case, a necessary effect of the decrease.

When Sewell uses the term “compensate”, I believe he is referring only to cases in which the increase is an unrelated event that merely “offsets” an improbable event of another type, according to some global accounting scheme, as it is used in the Styer and Bunn papers. That definition is consistent with what you referred to as his “misinterpretation” of the compensation argument:

Granville misinterprets the compensation argument as saying that anything, no matter how improbable, can happen in a system as long as the above criterion is met.

This is obviously wrong.

I’m glad we agree that this is obviously wrong. However, can you explain how the methodology used by Styer and Bunn cannot be used to show that “anything, no matter how improbable, can happen in a system as long as the above criterion is met?” Just substitute the probability ratio of, say, a set of a thousand coins going from half heads and half tails to all heads in place of their estimate for the increase in improbability of organisms due to evolution. Plug that into the Boltzmann formula, and compare to the thermal entropy increase. If its magnitude is less, the Second Law is satisfied.
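For concreteness, the substitution described above can be carried out numerically (a sketch, assuming fair coins and the Boltzmann relation S = k_B ln W applied to macrostate multiplicities; the 1 J heat flow is an arbitrary illustration): the "entropy decrease" this accounting assigns to a thousand coins going all heads is on the order of 10^-20 J/K, so even a trivial flow of heat nearby "compensates" for it.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
N = 1000            # number of coins

# Multiplicity W of each macrostate
W_half = math.comb(N, N // 2)  # half heads / half tails
W_all = 1                      # all heads

# "Entropy decrease" of the coins under S = k_B ln W
dS_coins = K_B * (math.log(W_all) - math.log(W_half))  # about -9.5e-21 J/K

# Thermal entropy increase from 1 J of heat flowing from
# 300 K surroundings into a 299 K system: dS = Q/T_c - Q/T_h
dS_thermal = 1.0 / 299.0 - 1.0 / 300.0                 # about +1.1e-5 J/K

print(f"Coins: {dS_coins:.2e} J/K, thermal: {dS_thermal:.2e} J/K")
print("Inequality satisfied:", dS_coins + dS_thermal >= 0)
```

Under this global accounting the inequality is satisfied by some fifteen orders of magnitude, which is exactly the objection: the bookkeeping "permits" the all-heads outcome without making it any less improbable.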

It is hard to believe that such a silly argument would even need refuting, but it does, because, as you implied in 170, many seem to think it doesn’t really matter how good or bad the argument is, as long as the conclusion is “correct”, so no one else (as far as I know) has challenged them on this yet.

236. 236
keiths says:

CS3,

You and Granville have fallen into the trap of wanting the second law to do more than it actually does. The second law forbids violations of the second law, no more and no less.

Let me give an example of the same error, but in terms of the first law.

Suppose a friend of yours claims that gerbils keep poofing into existence in his living room. He is constantly giving gerbils away to his friends as a result of the alleged gerbil influx.

You find this wildly implausible, but he is adamant that it really happens. You try to reason him out of his delusion by showing him that gerbils can’t possibly materialize out of thin air.

One of your arguments is that if gerbils really did poof into existence in his living room, this would be a violation of the first law of thermodynamics, which says that energy can be neither created nor destroyed. Since matter is a form of energy (by Einstein’s famous equation), the appearance of a gerbil out of thin air would violate the first law.

He tells you he’s made careful measurements that show that every time a gerbil appears, the mass of the furniture in the living room decreases by a corresponding amount. In other words, the incarnation of the gerbil is compensated for by a decrease in mass of the living room furniture.

You find this absurd and tell him “This compensation argument is bogus. The first law doesn’t allow gerbils to poof into existence merely because there is a compensatory loss of mass in the living room furniture!”

But if you tell him this, you are wrong.

The first law does allow gerbils to poof into existence, because the first law only forbids violations of the first law, no more and no less. As long as the mass of the furniture decreases by the correct amount, there is no violation of the first law.

The gerbil-poofing idea is still ridiculous, and you have many reasons to doubt it, but the first law is not one of them, because the first law is not violated. The first law is not obligated to rule out every wildly improbable event in the universe, including gerbil poofing. It only rules out violations of the first law.
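The bookkeeping in this hypothetical is easy to make explicit (a sketch; the 100 g gerbil mass is of course assumed): the first law is satisfied as long as the mass-energy ledger balances, and the size of the ledger entries says nothing about how probable the event is.

```python
C = 299_792_458.0       # speed of light, m/s
MEGATON_TNT = 4.184e15  # J per megaton of TNT

m_gerbil = 0.1  # kg, assumed mass of one gerbil

# Energy equivalent of the gerbil via E = m c^2
E = m_gerbil * C**2  # about 9.0e15 J

# The furniture must lose exactly this much mass-energy
furniture_mass_loss = E / C**2  # 0.1 kg, by construction

print(f"Ledger entry per gerbil: {E:.2e} J "
      f"(~{E / MEGATON_TNT:.1f} Mt of TNT equivalent); "
      f"furniture loses {furniture_mass_loss:.3f} kg")
```

The ledger balances, so the first law is untouched; the roughly two-megaton energy transaction hiding in the living room is merely (wildly) improbable, which is a different complaint.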

Likewise with evolution and the second law. You and Granville may (and obviously do) think that evolution is ridiculous, and that people and locomotives and lava lamps can’t appear on a formerly barren planet simply because solar energy is streaming in and waste heat is radiating out. But your skepticism has nothing to do with the second law, because the second law is not violated.

You just think evolution is improbable, like every other IDer and creationist.

237. 237
Joe says:

The equivocation continues:

You just think evolution is improbable, like every other IDer and creationist.

Nope. We say there isn’t any evidence that unguided evolution can account for multi-protein configurations. And that happens to be a fact.

We also say there isn’t any way to test the claim that unguided evolution can account for the diversity of life observed. That also happens to be a fact.

And until you produce evidence for blind and undirected chemical and physical processes actually producing something of note, then you cannot show that Granville is wrong.

238. 238
keiths says:

kairosfocus,

There is not one place in that avalanche of words where you say something like “Step 3 is wrong, and this is why.”

You can’t refute my simple 4-step argument, and the onlookers know it.

239. 239

CS3:

When Sewell uses the term “compensate”, I believe he is referring only to cases in which the increase is an unrelated event that merely “offsets” an improbable event of another type, according to some global accounting scheme, as it is used in the Styer and Bunn papers. That definition is consistent with what you referred to as his “misinterpretation” of the compensation argument:

Neither Styer nor Bunn seems to me to be saying that the system experiencing the “compensatory” increase in entropy is an “unrelated event”.

Clearly, if work is done by one system on another, the two systems are related.

If a vortex generated on earth by a shaft of sunlight warming a patch of ground and causing a convection current thus reduces the entropy of the air above the patch of warm ground, there is no violation of the 2nd Law, because, as a result of the convection current, the patch of ground cools, or would cool if it were not re-warmed by the sun.

And the reason the sun can warm that patch of ground is that the 2nd Law does not forbid it, because the sun is hotter than the earth. If the sun did not heat the earth it would be more difficult for local entropy decreases to occur on earth, but not impossible, because the earth’s surface is also warmed by the interior of the earth, so it is possible the sun is not necessary. But the fact remains that the heating of the earth by the sun, especially given that the earth is also turning, so that it is sequentially warmed and cooled, leading to both spatial and temporal temperature gradients, gives vast numbers of opportunities for local decreases of entropy to occur on earth (in other words, for small systems to fall in entropy as their surroundings rise).
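The entropy budget in this picture can be made quantitative (a sketch with illustrative temperatures: an effective sunlight temperature near 5800 K and a patch of ground near 300 K): heat flowing from a hot source to a cooler sink always yields a net entropy increase, and that increase is the "room" within which local decreases can occur.

```python
T_SUN = 5800.0    # K, approximate effective temperature of sunlight
T_GROUND = 300.0  # K, patch of warm ground
Q = 1000.0        # J of solar heat absorbed (illustrative)

dS_sun = -Q / T_SUN          # entropy given up by the radiation field
dS_ground = Q / T_GROUND     # entropy gained by the ground
dS_net = dS_sun + dS_ground  # about +3.16 J/K

print(f"Net entropy change: {dS_net:+.3f} J/K (2nd Law requires >= 0)")
```

Because the source is so much hotter than the sink, the same joule of heat "costs" far less entropy leaving the sun than it "earns" arriving at the ground, which is why the net change is comfortably positive.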

A prime example, which no-one seems to want to consider, is tornadoes, which are massive local systems of reduced entropy, and which would, if Granville were correct, be violations of the 2nd Law of thermodynamics.

But more to the point is this continued use of the word “probability” without reference to the generative process postulated to have given rise to the observed arrangement.

Clearly shining the sun on a series of coin-tosses cannot make “all heads” more probable than it would be in the absence of the sun. The answer is simple: the sun makes no difference at all to the coin-tossing system – it does no work on it that can possibly affect the outcome.

But shine the sun on the patch of ground below a layer of still air, which would have its molecules distributed in a uniform arrangement with respect to their next direction of travel, and you will quickly get an equivalent result to “all heads” – all the molecules travelling in the same direction.

Sure, a tornado cannot rearrange rubble to form a town, any more than sunshine can rearrange coins to form All Heads. But that’s not because the 2nd Law forbids it; it’s just not what is probable following a tornado. However, many things that would be highly improbable in the absence of a tornado become highly probable in the presence of one, such as sofas landing in trees, and previously scattered bits and pieces being deposited in a single pile.

It’s not that increasing entropy in some distant part of the universe magically makes invisible pink unicorns more probable on earth. Obviously it doesn’t, and no-one, certainly not Styer or Bunn, makes any case remotely resembling that.

But entropy in a system can dramatically decrease in a variety of different ways in response to work being done on that system by a surrounding system, which, in turn, must increase in entropy as a result of the work done.

As others have said: this is not proof of evolution; but it is certainly a compelling rebuttal of Granville’s case that living things (which are low entropy systems) cannot have arisen spontaneously on earth without a violation of the 2nd Law, even with the input of our sun.

In fact the sun hugely increases the probability of local entropy decreases, because it is a major cause of temperature gradients, and thus, for example, of convection currents in fluids.

One good reason not to expect life to evolve on planets with no water or atmosphere.

240. 240
Alan Fox says:

…a tornado for that matter, is perfectly explicable under the 2nd Lawn

And I thought it was moles!

241. 241
kairosfocus says:

KS: You erected and burned an ad hominem laced strawman. I took apart the strawman tactic step by step. Your response is to try another strawman directed at me. Your grade just sank to F – – – – . KF

242. 242
Alan Fox says:

@ KF

I think you can leave it to a fair reader to decide who has presented a rational argument and who hasn’t.

That anyone could write

..a farrago of red herrings led away to strawmen which are then set alight…

and then wonder why his comments are the object of ridicule among the few who bother to read them… well, you know what I’m thinking.

PS @ KF

I can’t find the comment where you talk about spending time alone on a street corner protesting. It wasn’t about the Redemption Song statue was it? How did that go?

243. 243
kairosfocus says:

F/N: The second law operates at two levels: the classical, where entropy is in effect just another thermodynamic variable, and the statistical, where entropy is found to be tied closely to the statistical weights of clusters of microstates and to linked processes such as diffusion. It is the second level that brings out the information issues tied to constructive work, and highlights the folly of suggesting that diffusion and similar processes can reasonably be expected to perform constructive work ending in FSCO/I, whether in isolated, closed or open systems (to use the terminology I prefer). GS is right that opening a system up to mass and energy flows does not suddenly relieve constructive work of the need for a specific explanation. If something is overwhelmingly unlikely in an isolated system as a spontaneous process, it will remain extremely unlikely when the system is opened up, unless something specific is going on that drastically enhances its likelihood, e.g. a plan and process for construction. And, I suspect the “can ANYTHING happen in an open system” was not meant literally. KF
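The statistical-weight point can be illustrated with a toy model (an illustration only, not an analysis of any real system, with an assumed N of 100): N gas particles each independently in the left or right half of a box, where the evenly mixed macrostate overwhelmingly outweighs any highly ordered one.

```python
import math

N = 100  # toy number of gas particles, each in the left or right half

# Probability that an equiprobable configuration puts every particle
# in the left half (multiplicity 1 out of 2^N microstates)
p_all_left = 1 / 2**N  # about 7.9e-31

# Multiplicity of the 50/50 macrostate
w_mixed = math.comb(N, N // 2)  # about 1.0e29

print(f"P(all particles on the left) = {p_all_left:.1e}")
print(f"The 50/50 macrostate has {w_mixed:.1e} times the weight "
      f"of the all-left macrostate")
```

The disparity only grows with N, which is why diffusion toward the high-multiplicity macrostate is the overwhelmingly expected spontaneous behaviour, whatever one concludes from that about constructive work.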

244. 244
Alan Fox says:

Talk of the “laws” of the universe always makes me smile. As if the fundamental particles, fields and waves carry a rule-book and refer to it as necessary. In reality, we, as observers, are attempting to make mathematical models of what we observe and then test them against observations.

245. 245
kairosfocus says:

AF: All you just did was try to excuse yourself from actually thinking about what is going on in the thermodynamics. And, for cause, I stand by my note that KS played and continues to play red herring and strawman tactics, as I took time to show. When you can show us empirically observed cases of the forces of diffusion or the like [or for that matter tornadoes hitting junkyards or hardware stores] spontaneously performing constructive work issuing in functionally specific complex organisation, you will have something worthwhile to say. Meanwhile, no substance, all rhetoric. KF

246. 246
Alan Fox says:

KF

I really have nothing to say about Granville Sewell’s paper on thermodynamics. I think Joe Felsenstein, Keith and Lizzie have dealt with it effectively. Regarding entropy and the second law, I have been reading up, and admit I am struggling with the concepts. It seems that, to get a proper understanding, one needs to work through the history, and I am only up to the sixties and haven’t yet got to grips with Feynman. You, and your steel balls, seem stuck in an earlier classical period.

247. 247

Kairosfocus:

Which has greater entropy, a tornado or still air?

Nobody seems to want to answer this, yet it is neither a red herring nor a straw man.

248. 248
cantor says:

Liddle@219 wrote:
Let me ask you one question:

1. Does a tornado have less, or more, or the same, entropy, of any sort, than still air?

Yes.

249. 249
CS3 says:

Neither Styer nor Bunn seem to me to be saying that the system experiencing the “compensatory” increase in entropy” is an “unrelated event”.

Clearly, if work is done by one system on another the two system are related.

Again, you are imposing your view on them. As I said earlier:

To be clear, though, that is definitely not the position Asimov, Styer, Bunn, and Lloyd were taking. If they did not think anything improbable was happening, there would be no need for them to convert the probabilities of improbable events into entropies and compare them to a different type of entropy to satisfy an inequality. And, even if the energy were causing these events, it makes no sense for them to try to convert from the original improbability of what happened to how much energy is needed. It takes energy to flip coins, but it takes no more energy to flip all heads than to flip half heads and half tails.

If I think energy is simply making something, for example a plant forming a flower, not improbable (and I would agree in this case), I say, as you do, that energy is making that something not improbable. Perhaps I provide some details of a mechanism by which that might be the case. If I want to know how much energy is required, I analyze the mechanism, or perhaps perform an experiment if possible. I do not count the number of microstates of a flower and plug it into the Boltzmann formula to see how much energy I need, not even as an upper or lower bound. I only do that if I am trying to compensate improbable events with events that, if reversed, would be more improbable, according to some global accounting scheme.

Hopefully you can forgive Sewell for writing a paper that responds to the arguments in the literature rather than to the personal views of UD posters.

250. 250

I’m not quite sure of the point you are trying to make by not answering mine.

251. 251

And let me remind you of the part of my post following the question I posted, which you snipped:

If your answer depends on the kind of entropy, please say which entropies are greater, the same, or less, in a tornado than in still air.

252. 252
Axel says:

‘Evolutionists disagree because they believe that biological systems spontaneously organize themselves. That is, exactly the inverse of what the 2nd law states.’

Nirwad, evah a traeh. S’taht ytterp gnorts leurg rof ruo sdneirf, ey nek.

253. 253
Axel says:

Seigolopa. ‘gnorts leurg’ dluohs spahrep be, ‘niht leurg’.

254. 254

CS3:

If I think energy is simply making something, for example a plant forming a flower, not improbable (and I would agree in this case), I say, as you do, that energy is making that something not improbable. Perhaps I provide some details of a mechanism by which that might be the case. If I want to know how much energy is required, I analyze the mechanism, or perhaps perform an experiment if possible. I do not count the number of microstates of a flower and plug it into the Boltzmann formula to see how much energy I need, not even as an upper or lower bound. I only do that if I am trying to compensate improbable events with events that, if reversed, would be more improbable, according to some global accounting scheme.

Hopefully you can forgive Sewell for writing a paper that responds to the arguments in the literature rather than to the personal views of UD posters.

It would make sense for Sewell to address the offered rebuttals to the claims he himself has made, which include the claim that if evolution is the explanation for the development of life on earth, then natural selection must have the capacity to violate the second law.

Both Styer and Bunn attempted to show, by casting the problem in terms of entropy, that this is not the case.

If you want specific hypothesised mechanisms, then there are plenty in the biochemical, genetics and population genetics literature. You may not find them persuasive, but that does not mean that the 2nd Law would have had to have been violated for them to occur. None of the postulated processes (unlike the Design hypothesis) involves anything other than normal physics and chemistry.

And if all Sewell means is that, like Dembski, he finds the evolutionary hypothesis implausible, then his argument has no more to do with the 2nd Law of Thermodynamics than does a teacher’s skepticism when a child claims that her dog ate her homework. We don’t need Boltzmann’s equations to calculate the level of her incredulity.

Sewell finishes his New Perspective piece with this paragraph:

But one would think that at least this would be considered an open question, and those who argue that it really is extremely improbable, and thus contrary to the basic principle underlying the second law of thermodynamics, would be given a measure of respect, and taken seriously by their colleagues, but we aren’t.

The simple reason why those who have read Granville’s work do not take it seriously is that he has simply gussied up an argument from incredulity with some fancy equations that have absolutely nothing to do with biology or genetics or natural selection, and essentially said:

I think evolution is improbable, and because the 2nd Law says that improbable things are more improbable than probable things, evolution is improbable.

255. 255
Axel says:

Cantor did reply to your question, EBL. It was a single word, ‘Yes’, which I shall paraphrase for you as follows: ‘How long is a piece of string?’

256. 256
Joe says:

Alan Fox:

I think you can leave it to a fair reader to decide who has presented a rational argument and who hasn’t.

Rational argument? We can leave it to the fair reader to notice that your guys have failed to produce any EVIDENCE that demonstrates Granville is wrong.

257. 257

Evolutionists disagree because they believe that biological systems spontaneously organize themselves. That is, exactly the inverse of what the 2nd law states.

If that was the inverse of what the 2nd law states, how come tornadoes spontaneously organise themselves?

258. 258
cantor says:

Axel @ 251
“How long is a piece of string”?

I almost choked laughing. You made my day.

259. 259
Axel says:

They dinnae. God does it.

260. 260

So, guys,

Which has greater entropy, a tornado or still air?

Surely someone must have a view on this!

261. 261
Axel says:

Go to the top of the class, Lidds.

262. 262

I almost choked laughing. You made my day.

OK, so cantor thinks that entropy is a meaningless concept that you can’t measure anyway.

So obviously he is not going to be persuaded by Granville’s argument, and I guess that goes for Axel as well.

Anyone like to speak up for the validity of entropy as a measure of whether a postulated process would violate the 2nd Law of thermodynamics?

263. 263
Axel says:

Well, Cantor, I remember the young officer in charge of us on the firing range, seeming to be terrified that I would accidentally shoot a colleague inadvertently. He must have thought I was a bit absent-minded.

264. 264
Axel says:

Well, Cantor, I remember the young officer in charge of us on the firing range, seeming to be terrified that I would accidentally shoot a colleague inadvertently. I suppose he thought I seemed a bit absent-minded.

Paraphrasing one word isn’t that easy, you know.

265. 265
cantor says:

EBL @ 246:

It took a while to get there, but you eventually sorta answered it. Thank you.

I’m not quite sure of the point you are trying to make by not answering mine.

Unlike you and KS, I accurately answered the question you asked, the first time, instead of writing an essay on an unrelated topic.

EBL @ 247:
And let me remind you of the part of my post following the question I posted, which you snipped…

No reminding is necessary. The sentence started with the word “if”. If the consequent is not true, then neither is the antecedent.

266. 266
keiths says:

CS3,

Do you see that the second law is as irrelevant to your doubts about evolution as the first law would be to your doubts about gerbil-poofing?

(I’ll bet that’s the first time that sentence has ever been written in the history of the English language.)

Imagine this hypothetical dialogue:

keiths:
If compensation happens, then the second law is not violated.

Granville:
That can’t be true! Improbable things are still improbable!

keiths:
Of course they are. But if compensation happens, then the second law isn’t violated.

No one is claiming that compensation is an explanation for anything other than why the second law is not violated. Compensation does not explain evolution; it merely explains why evolution does not violate the second law.

There are two separate questions:

1. Does evolution violate the second law?

2. Is evolution improbable?

The answer to #1 is ‘no’, and compensation shows this.

The answer to #2 is ‘yes’ according to you and Granville, and neither compensation nor the second law has anything to do with that.

You are both merely arguing that evolution is improbable. Just like IDers and creationists everywhere.

267. 267
cantor says:

EBL @ 258:
cantor thinks that entropy is a meaningless concept

Libelous. I never said any such thing. Apology expected.

268. 268

Cantor, if you don’t think the concept meaningless, that’s great, and I certainly apologise for suggesting that you did.

But in that case, perhaps you would answer my question.

So let me try again:

1. Does a tornado have less, or more, or the same, entropy, of any sort, than still air?

You answered “yes”, which makes no sense to me.

If entropy is a meaningful concept, then it must be possible to evaluate the relative entropies of still air versus a tornado, no?

And if the answer depends on the kind of entropy, I invited you to specify which.

269. 269
cantor says:

EBL @ 264:

Does a tornado have less, or more, or the same, entropy, of any sort, than still air?

You answered “yes”, which makes no sense to me.

Yes, a tornado has less, or more, or the same, entropy, of any sort, as still air.

If entropy is a meaningful concept, then it must be possible to evaluate the relative entropies of still air versus a tornado, no?

Yes.

And if the answer depends on the kind of entropy, I invited you to specify which.

I responded to this in an earlier post.

270. 270
keiths says:

cantor seems to be afraid of the question. I think I know why.

271. 271

If a tornado can have less, more, or the same entropy as still air, can you explain the conditions under which it would have:

1. less
2. more
3. the same.

You accused me earlier of being evasive, despite the fact that I had made my position very clear (it not being one of the ones you had offered).

And if you have addressed it in an earlier post, please give me the post number, because I cannot find a post by you in which you have addressed this question.

272. 272
Thomas2 says:

EL (at 158) –
[originally posted at 2.39pm July 5th]:

It seems surprising that an argument against unintelligent or blind evolution should be countered as “an argument from incredulity”.

This phraseology seems to suggest that skeptics should uncritically acquiesce to arguments from credulity, and that Darwinian evolutionary science relies on arguments from gullibility!

Science requires adequate positive evidence for its claims, and is required to be accessible to proper scrutiny: “he who asserts must prove”. A healthy skepticism should be welcomed, not disparaged, surely?

(I don’t mean this unkindly – I appreciate your posts).

273. 273
Thomas2 says:

KS (at 153) –
[originally posted at 2.19pm July 5th]:

I am a novice here, but I have an interest in whether or not mindless evolution does in fact violate the 2nd Law or not, so I’d be grateful for your view on whether the following is on the right track:

For your apparatus to work, the solar cell will have to power a heat pump, and the operational efficiency/inefficiency of the heat pump will provide the necessary compensation.

For undirected (blind, mindless) evolution to work without violating the 2nd Law, natural selective processes (successful competition between organisms with differential functionality-selectivity-fecundity, simplifying and ignoring luck) will presumably play the equivalent role of the heat pump.

Entropy (or X-entropy) will, however, be quantified by an appropriate measure of complexity, not functionality-selectivity-fecundity (since what we are concerned with here is the unplanned/unintelligent/mindless development of “organised”, or “specified”, complexity).

Thus, in order to demonstrate that undirected evolution works without violating the 2nd Law and that natural selective processes can indeed supply the role of a heat pump, it needs to be demonstrated empirically that there exists (on average) a significant positive correlation between appropriately quantified increases in functionality-selectivity-fecundity and appropriately quantified increases in complexity.

Is there any empirical evidence which would reliably suggest this?

274.
cantor says:

cantor seems to be afraid of the question. I think I know why.

The one-man peanut gallery is back.

You have no clue whatsoever.

275.

Thomas2:

You misunderstand me: I think arguments from incredulity are perfectly valid. I do not know that there are no invisible pink unicorns, but that is an argument from incredulity. I see no reason to think there are, and it runs counter to my entire understanding of the way the world works.

My point is merely that Granville’s argument is simply such an argument – the 2nd Law part is irrelevant to it, and indeed wrong. Nothing about the postulated evolutionary mechanisms to explain life involves any violation of the 2nd Law of thermodynamics.

And thanks for your kind words 🙂

276.

Well, cantor, it’s certainly not clear to me why you will not give me a straightforward response. It would be hugely clarifying if you did, because it might tell me why you seem to think that tornadoes are perfectly possible under the 2nd Law, but life is not.

I’m not sure whether you think they have greater entropy than still air, and are therefore highly probable under the 2nd Law; or have less, in which case they would seem to require as much explanation as the spontaneous appearance of life does, given the 2nd Law; or the same, in which case I’d like to know what you mean by “entropy”.

277.
keiths says:

Lizzie to Thomas2:

You misunderstand me: I think arguments from incredulity are perfectly valid. I do not know that there are no invisible pink unicorns, but that is an argument from incredulity. I see no reason to think there are, and it runs counter to my entire understanding of the way the world works.

Lizzie,

I think Thomas2 may be thinking of the logical fallacy known as the “argument from incredulity”.

Your argument about invisible pink unicorns isn’t an argument from incredulity, because you’re not saying “I don’t see how invisible pink unicorns could exist; therefore they don’t exist.” Rather, you’re saying “I see no evidence at all that invisible pink unicorns exist, so I have no reason to believe that they do.”

278.
cantor says:

You accused me earlier of being evasive, despite the fact that I had made my position very clear (it not being one of the ones you had offered).

To be clear, it took quite a few posts to get there. In fact, it took so many posts that your sidekick KS even accused me of being “oddly insistent”.

If a tornado can have less, more, or the same entropy as still air, can you explain the conditions under which it would have:

1. less
2. more
3. the same.

It depends on the mass, the temperature, the temperature gradient, the pressure, the pressure gradient, the gravitational potential, etc etc etc of the still air and the tornado you are comparing.

In general I would think that a tornado has less entropy than the still air that existed minutes earlier at the same location. Is that what you were asking?

If so, I have no problem with that.

279.
cantor says:

request from the peanut gallery:

On further reflection, I think I will do that.

Your actual position is Option 2 (but for some inscrutable reason you seem reluctant to come out of the closet and own it).

And what both KF (a) and GS (b) seem to be saying is that unless there is

a) some pre-existing mechanism on the early barren lifeless planet which is capable of using incoming raw heat energy to do constructive work, OR

b) something other than raw heat energy coming in,

… then the transformation of the barren lifeless planet will not take place.

I will add a 3rd possibility:

c) some pre-existing information encoded in the planet that could somehow facilitate the spontaneous creation of a mechanism as described in (a) above

280.
cantor says:

EBL @ 272 wrote:
you seem to think that tornadoes are perfectly possible under the 2nd Law, but life is not.

What did I say that caused you to infer this??

Did you pigeonhole me simply because I objected to KS’ immature and disrespectful treatment of Dr Sewell?

281.
scordova says:

Thomas2 and other newly arrived pro-ID and creationists at UD (not the old timers),

I’m a creationist, but I do not think evolution violates the 2nd law as stated in most textbooks. Below are the links of my arguments at UD. Out of respect for my colleague Dr. Sewell, I’m minimizing getting too involved in a shouting match since all the shouting was done last year.

In brief I showed how a tornado will REDUCE the entropy of a 747!

http://www.uncommondescent.com.....d-not-use/

http://www.uncommondescent.com.....se-part-1/

and finally

http://www.uncommondescent.com.....se-part-2/

The irony is that even though the Darwinists know I agreed with them, they couldn’t bring themselves to say, “Good job, Sal”. I can hardly post textbook equations without Darwinists saying something derogatory….

I was an engineering grad student in statistical mechanics and thermodynamics last year, and thus this topic interested me; I wrote on it from the perspective of a student of statistical mechanics and thermodynamics.

That was 11 months ago. Everything that each side wanted to say has been pretty much said.

I fully support Dr. Sewell’s right to a fair hearing of his ideas, and though I vehemently disagreed with Dr. Sewell, I wouldn’t think to pull the sort of underhanded maneuvers that Nick Matzke pulled to impede publication of Dr. Sewell’s claims. That was crossing boundaries and meddling in affairs Matzke had no business in.

I have remained mostly silent on these matters except now because I see there are new commenters that haven’t heard the news that some creationists actually agree with evolutionists on the 2nd law and its relation to ID.

This is an extremely challenging topic. Just follow the links of my treatment of Mike Elzinga’s concept test and the Purcell Pound experiment in the last link to get an idea of how circumspect we should all be on these difficult technical matters.

282.
kairosfocus says:

Cantor, a reasonable point. The issue is not so much that unless there is X, but rather that Y, the suggested alternative to X — where X is shown to be of adequate order — has neither empirical nor analytical warrant to be able to achieve the outcome. KF

283.
kairosfocus says:

F/N: I should not omit to note how AF — closely associated with a slander that has been standing for months — is offended by my saying (and showing) that something is a mish-mash of red herrings and strawman arguments. KF

284.
kairosfocus says:

SC: The fundamental issue is whether, per relevant statistical analysis, it is reasonable that diffusion and the like be seen as credibly able to carry out constructive work ending up in FSCO/I. For good reason, the answer is no. That is, we need to look at the underlying circumstance of the relevant law. (Hence my marbles and pistons conceptual model, designed to help non-specialists get an idea of what is going on without drowning in the math.)

When we do so, we see that the same reason why we have no reasonable expectation to see 500 H or a similar special result from coin tossing applies. Constructive work yielding something marked by FSCO/I has just one empirically and analytically grounded known adequate cause — design. (Where work is orderly, forced motion, F*dx and all that.) And given the close link to information involved, that is no surprise.

But of course the same statistical analysis is not going to lock out logical possibilities absolutely; it works by a subtler point, the failure of lucky noise to appear or of blind search as a viable mechanism. Golf balls, as a practical matter, do not play themselves across 18 holes by logically possible but maximally implausible clusters of forces and circumstances. KF

285.
keiths says:

Hi Thomas2,

For your apparatus to work, the solar cell will have to power a heat pump, and the operational efficiency/inefficiency of the heat pump will provide the necessary compensation.

Yes, in the sense that the operation of the heat pump increases its own entropy and the entropy of the surroundings more than it decreases the entropy of the oxygen/carbon dioxide mixture.

For undirected (blind, mindless) evolution to work without violating the 2nd Law, natural selective processes (successful competition between organisms with differential functionality-selectivity-fecundity, simplifying and ignoring luck) will presumably provide the equivalent role of the heat pump.

Well, any physical process that causes a local decrease in entropy must simultaneously increase the entropy of the surroundings by an equal or greater amount, so in that sense any such process plays the role of the heat pump in my example.

Entropy (or X-entropy) will, however, be quantified by an appropriate measure of complexity, not functionality-selectivity-fecundity (since what we are concerned with here is the unplanned/unintelligent/mindless development of “organised”, or “specified”, complexity).

Well, apart from the fact that the concept of X-entropy doesn’t make sense, it is not inversely correlated with complexity. A chamber containing oxygen and carbon dioxide, with all of the oxygen on one side and all of the carbon dioxide on the other, will have low “oxygen-entropy” and low “carbon-dioxide-entropy”. But that doesn’t make it complex.
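As an aside for readers who want numbers: the entropy cost of keeping the two gases separated can be estimated with the ideal entropy of mixing, ΔS = −R Σ nᵢ ln xᵢ. Here is a minimal sketch, assuming an illustrative 1 mol of each gas (the amounts are my choice, not anything from the thread):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def mixing_entropy(moles):
    """Ideal entropy of mixing: dS = -R * sum(n_i * ln(x_i))."""
    n_total = sum(moles)
    return -R * sum(n * math.log(n / n_total) for n in moles)

# Hypothetical chamber: 1 mol O2 + 1 mol CO2.
# The fully mixed state has this much MORE entropy than the separated state:
dS = mixing_entropy([1.0, 1.0])
print(f"dS of mixing = {dS:.2f} J/K")  # about 11.5 J/K
```

So un-mixing the chamber would require shedding roughly 11.5 J/K of entropy to the surroundings, a perfectly ordinary compensation in keiths’s sense; and, as he notes, nothing about the separated state is complex.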

Thus, in order to demonstrate that undirected evolution works without violating the 2nd Law and that natural selective processes can indeed supply the role of a heat pump, it needs to be demonstrated empirically that there exists (on average) a significant positive correlation between appropriately quantified increases in functionality-selectivity-fecundity and appropriately quantified increases in complexity.

To show that undirected evolution doesn’t violate the second law, you need only show that the processes involved don’t violate the second law.

That’s one of the reasons I find these arguments so absurd. Creationists and IDers generally don’t seem to think that life itself violates the second law; they just think that evolution does. But evolution doesn’t require anything more than heritable variation with differential reproductive success. You get all of those with life itself! Why would those things magically start violating the second law merely because you’ve slapped the “evolution” label on them?

286.
keiths says:

Sal,

The irony is that even though the Darwinists know I agreed with them, they couldn’t bring themselves to say, “Good job, Sal”.

I’m not sure why you expect congratulations. Do you see me congratulating Lizzie for understanding the second law, or vice-versa?

If it makes you feel better, I will state that if you believe that evolution does not violate the second law, and it appears that you do, then I agree with you on that point.

You may even quote me on that, but don’t be an ass by quote-mining me as you so often do.

287.
cantor says:

I will add a 3rd possibility…

I will add a 4th possibility, which is anathema to MNs:

d) there is an agent, undetectable to our science, acting at the micro level to cause otherwise improbable things to happen, without detectably violating any laws of physics, including the 2nd law.

288.
kairosfocus says:

Onlookers

Predictably, KS ignores that in the context where GS spoke in terms of X-entropy, he specifically highlighted that he was discussing situations dominated by probabilistic patterns leading to diffusion [and the like], as was cited TWICE above.

Next, his discussion of a case of coupling energy through energy conversion devices (heat pumps, solar panels) predictably side-steps the points that the mechanisms performing the coupling, energy conversion and work have to be accounted for, and that where such mechanisms exhibit FSCO/I they are not plausibly the product of diffusion and the like.

Constructive work leading to FSCO/I has to be explained and raw injection of energy and/or mass is not a reasonable explanation.

Back to KS’s red herrings and strawmen games to distract us from that pivotal point.

KF

289.
keiths says:

More huffing and puffing from KF.

Meanwhile, my simple 4-step argument goes unrebutted by KF or anyone else, to the amusement (or dismay) of the onlookers.

290.
CS3 says:

1. Does evolution violate the second law?

The answer to #1 is ‘no’, and compensation shows this.

If one considers the Second Law only applicable to thermal entropy, then I agree (with my earlier caveat about what is meant by “compensation” in this case).

If one considers the Second Law applicable to the improbability of organisms, as in Styer, etc., then compensation neither proves nor disproves anything, assuming organisms are not being imported through the boundary, because there is no valid conversion between organism complexity and thermal entropy.

It would, I think, take a lot of writing for me to adequately address the nuances of where I agree with you and where I would disagree with regard to all the implications of that comment. I think we have beaten this horse to death, so I am good with leaving off here, and letting others who read our comments, if any, draw their own conclusions based on our discussion as it currently stands.

291.
keiths says:

CS3,

Okay, but I urge you to keep thinking about it.

Particularly this part:

Do you see that the second law is as irrelevant to your doubts about evolution as the first law would be to your doubts about gerbil-poofing?


292.
kairosfocus says:

Onlookers, at this point KS is indulging in willfully continued misrepresentation, as his red herring and strawman tactics were corrected and the issue of explaining constructive work issuing in FSCO/I was put firmly on the table. Cf here at 227 above.

If you look carefully, you will find that he simply does not face the issue that diffusion is not a reasonable explanation of such constructive work leading to functionally specific complex organisation, nor are similar forces associated with a trend of increased disorder.

However, this behaviour is no surprise; it is habitual and a reflection of an ideological agenda. He rhetorically distorts, misrepresents and dismisses with cheap quips instead of soberly addressing issues on their genuine merits — the attempt to use freezing water to answer the demand to explain organisation, not order, is a classic strawman. And when I identified and corrected it as a strawman, for example, he went on to dance all around and deny that he was properly corrected. It is the same that has led him and his ilk to dispute the clear evidence and analysis that points to the credible cause of finding a box of 500 coins, all H. And so on, for issue after issue.

He then compounds all of this by seeking to make ad hominem talking points, repeated drumbeat style, against anyone who corrects him. This is likely to take in someone just glancing, or who is not closely following up a matter that requires step by step attention. On the strength of such manipulative tactics he then hopes to get away with claiming a rhetorical victory for his agenda, but all along he has not soundly dealt with the matter on the merits.

Here, again, is the need to properly explain constructive work issuing not in mere order but complex organisation, where something like diffusion tends strongly to be a disorganising force. KF

293.
CS3 says:

scordova:

If you have not done so, I encourage you to at least read my comment 169. Whatever your position with regards to the Second Law, I think you will see that, in the Cornell paper, Sewell is merely addressing these papers on their own terms, and is right to point out the error of how Styer and Bunn compute a conversion between the “improbability of organisms” and thermal entropy. Perhaps you may feel that Styer and Bunn are making other errors too, not addressed by Sewell, but in any event I suspect you will agree that their methodology is not sound and should be challenged.

294.
Mung says:

Elizabeth Liddle:

You have confused “order” as in low entropy with “order” as in “not chaos”.

El oh El

There’s one for the ages.

295.
cantor says:

Consider the following:

If I were to randomly select a group of 75 different people from a roomful of 200 men and 100 women, what is the probability that the selected group would contain exactly 25 women?

Question: How many people contributing to this thread know how to do this computation using only the knowledge currently in your head? No Googling, no phone-a-friend, no ask-the-audience, no leafing through textbooks, etc.
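For onlookers who want to check: cantor’s puzzle is a standard hypergeometric probability, P = C(100,25)·C(200,50)/C(300,75), since people are drawn without replacement. A sketch of the computation using Python’s exact integer binomials:

```python
from math import comb

def hypergeometric_pmf(successes_drawn, draws, successes_total, population):
    """P(exactly k successes in n draws without replacement)."""
    return (comb(successes_total, successes_drawn)
            * comb(population - successes_total, draws - successes_drawn)
            / comb(population, draws))

# 75 people drawn from a room of 100 women + 200 men; exactly 25 women.
p = hypergeometric_pmf(25, 75, 100, 300)
print(f"P(exactly 25 women) = {p:.4f}")
```

Since 75 × (100/300) = 25, twenty-five women is exactly the expected count, which makes it the single most likely outcome, though still only around one draw in nine.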

296.
keiths says:

I could do it, but I don’t have the formula memorized, so I’d have to derive it.

What’s the relevance?

297.
cantor says:

292: I could do it

Go ahead and derive it while we watch.

298.
keiths says:

cantor,

You didn’t answer my question. What’s the relevance?

What does this have to do with Granville’s paper, the second law, and evolution?

299.
cantor says:

re: 294

Yeah, I thought that’s what would happen. That’s the last time you get the benefit of the doubt.

And by the way, have you forgotten? You’re supposed to stop commenting on my posts.

300.
keiths says:

cantor,

You’re asking people to jump through hoops for you, but you aren’t even willing to explain why your request is relevant?

And by the way, have you forgotten? You’re supposed to stop commenting on my posts.

Says the guy who addressed a comment to me just a few hours ago.

301.

keiths

Lizzie,

I think Thomas2 may be thinking of the logical fallacy known as the “argument from incredulity”.

Your argument about invisible pink unicorns isn’t an argument from incredulity, because you’re not saying “I don’t see how invisible pink unicorns could exist; therefore they don’t exist.” Rather, you’re saying “I see no evidence at all that invisible pink unicorns exist, so I have no reason to believe that they do.”

heh, I forgot it was a formal fallacy. Actually, I think it’s fine to say the first as well – if we didn’t do basic triage on batty ideas, we’d have no cognitive capacity to do anything useful! I should have said: I think arguments from incredulity are perfectly sensible, they just aren’t very compelling, which is why people say that “extraordinary claims require extraordinary evidence”.

So to address Thomas2’s point again: I agree that a healthy skepticism towards extraordinary claims is a good thing. But that’s different from saying: therefore it’s improbable, therefore it must violate the 2nd Law of thermodynamics!

302.

Mung:

Elizabeth Liddle:

You have confused “order” as in low entropy with “order” as in “not chaos”.

El oh El

There’s one for the ages.

I’m glad you like it, Mung. So would you like to apply it to my question as to whether a chaotic system like a tornado has more or less order-as-in-entropy than still air?

303.
keiths says:

Lizzie, to Mung:

I’m glad you like it, Mung. So would you like to apply it to my question as to whether a chaotic system like a tornado has more or less order-as-in-entropy than still air?

304.

cantor:

Consider the following:

If I were to randomly select a group of 75 different people from a roomful of 200 men and 100 women, what is the probability that the selected group would contain exactly 25 women?

Question: How many people contributing to this thread know how to do this computation using only the knowledge currently in your head? No Googling, no phone-a-friend, no ask-the-audience, no leafing through textbooks, etc.

I’ll have a go, though I might get it wrong, especially as it’s a limited pool, so you have to take into account selection without replacement, which means I need the hypergeometric distribution rather than the binomial distribution.

Which I can’t remember.

OK.

Will have to think about it. I’m having a wisdom tooth out later this week, so it’ll be a great thing to try to figure out while it’s happening! I always like to have a working-memory demanding task to do while I’m at the dentist!

Thanks!

305.

It depends on the mass, the temperature, the temperature gradient, the pressure, the pressure gradient, the gravitational potential, etc etc etc of the still air and the tornado you are comparing.

yes indeed, but as I do not know of any tornado that has smaller temperature or pressure gradients than still air (how would such a thing be a tornado!), I think it is something of an understatement to say that:

In general I would think that a tornado has less entropy than the still air that existed minutes earlier at the same location.

!

And yes,

Is that what you were asking?

it was.

If so, I have no problem with that.

Excellent. So we agree that, given that tornadoes consist of systematically spiralling, rising air with steep pressure and temperature gradients (and I don’t know of any that don’t), tornadoes have lower entropy than the still air they formed from.

And we can also, I think, agree that as a result of that low entropy, they can do a spectacular amount of work on a town, even though it may not be work we want done – elevating massive objects by hundreds of feet, for instance. So a large tornado represents a massive and spontaneous drop in entropy of the air of which it is composed.

It is a chaotic system of far lower entropy than the non-chaotic, higher-entropy still air that it earlier was. (Mung will appreciate this part.)

And yet I don’t think that you would argue that tornadoes require a Designer in order that the 2nd Law be not violated.

So I take it you would agree that massive drops in entropy can occur spontaneously without the 2nd Law being violated?

Would you therefore agree that there is no reason to invoke the 2nd Law as a reason why life could not have occurred spontaneously, given that the Earth is open to the sun on one side at all times, thus experiencing both temporal and spatial temperature gradients that can cause low-entropy systems to develop regularly on its surface?

306.
keiths says:

Lizzie,

Will have to think about it. I’m having a wisdom tooth out later this week, so it’ll be a great thing to try to figure out while it’s happening! I always like to have a working-memory demanding task to do while I’m at the dentist!

Sorry about your tooth. I don’t want to deprive you of a welcome extraction distraction, so I won’t say too much, but there is an easier way of solving cantor’s problem that doesn’t require coming up with the distribution.

307.
kairosfocus says:

Dr Liddle:

I need to speak for the record in correction. At this stage, not in expectation that you have the slightest inclination to do better than you have done with a sustained case of outright slander, but as a marker for the record, and at least the opportunity for the 1,800 daily onlookers, now and future, to see for themselves. In other words, I will not let your distortions and distractions stand without a testimony of correction.

Are you aware that tornadoes are examples of order, not complex functionally specific organisation? Namely, that they, like hurricanes, are essentially vortices formed by fluid dynamics [in the general context of convection], and thus by necessity similar to crystallisation?

Thus, that first, you are trying to substitute accounting for order tracing to mechanical necessity, for highly contingent complex functional organisation?

In short, again, you are misunderstanding and misrepresenting the design inference explanatory filter for the umpteenth time despite repeated corrections.

Next, you are on a red herring tangent regarding the relevant thermodynamics.

Had you seriously read App 1 of my always linked note, or the FYI-FTR that clips from it, you would observe that it has been openly and specifically discussed for years how Clausius’ context for deriving a quantification of the second law starts with two subsystems, A and B, transferring a quantity of heat d’Q; B, at the lower temperature, gains entropy while A loses it, the gain equaling or exceeding the loss. Algebraically, this is because d’Q/T_B is greater than d’Q/T_A, as T_B is lower.
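The Clausius bookkeeping described here is easy to sketch numerically; the heat quantity and temperatures below are illustrative values of my own, not anything from the thread:

```python
# Clausius: heat dQ leaves hot body A (temperature T_A) and enters cooler
# body B (T_B). A's entropy falls by dQ/T_A, B's rises by dQ/T_B; since
# T_B < T_A, the net change dQ/T_B - dQ/T_A is positive, as the second
# law requires.
dQ = 100.0   # J, illustrative
T_A = 400.0  # K, hot body
T_B = 300.0  # K, cold body

dS_A = -dQ / T_A
dS_B = dQ / T_B
dS_net = dS_A + dS_B
print(f"dS_A = {dS_A:.4f} J/K, dS_B = {dS_B:.4f} J/K, net = {dS_net:.4f} J/K")
```

The local entropy drop in A is real; the point of the bookkeeping is only that the total never goes negative.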

So, first, the insinuation you are hinting at above, that we are denying that local entropy reductions occur, is a strawman fallacy. (Do I need to keep on talking about farragoes of red herrings and strawmen? Do you remember that you operate an entire objecting blog under the theme that you expect others to consider how THEY may be in error? Is not the sauce you intend for the Gander also sauce suitable for the Goose? Or is all of this, in the end, an exercise of — pardon bluntness — arrogance by pontification on superficiality on your part? What the author of Alcibiades has Socrates describe as the ignorance that has conceit of knowledge? Pardon if you find such offensive, but first consider the impact of the slander you have harboured, then denied, then tried to justify, then tried to ignore, for months, as an index of how you have been operating. And then consider the matters on their actual merits, not agenda-serving red herrings and strawmen. Remember, you have donned the lab coat as a champion of evolutionary materialist orthodoxy, so you bear a degree of responsibility that an ordinary commenter does not.)

It seems I will need to use bold block capitals, to get your attention to a pivotal point.

FYI, WHAT IS BEING HIGHLIGHTED IS THE UNDERLYING PHYSICAL STATISTICAL MICROSCOPIC PHENOMENA THAT LEAD TO THE RESULT AND THEIR IMPLICATIONS FOR THE IMPLAUSIBILITY OF FORCES OF DIFFUSION OR THE LIKE PHENOMENA CARRYING OUT THE CONSTRUCTIVE WORK LEADING TO FSCO/I.

That is, entropy is a metric of a state-linked property of bodies and systems, effectively a measure of the number of ways that mass and energy at the relevant micro level may be arranged, subject to the gross scale (macro or lab) constraints acting, as, say, Boltzmann’s S = k log W shows directly, W being the number of ways and k a constant with appropriate units and scale. S, then, is a measure, in information terms, of the average missing information about the microstate given the macrostate. Or it can be viewed as a metric based on the number of unanswered yes/no questions needed to specify the micro state given the macro state. As Jaynes, Robertson, Brillouin and others have pointed out, had there been knowledge of the micro state, work could be extracted. Szilard’s explanation of Maxwell’s demon is a good example of this.
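A toy illustration of the Boltzmann relation and its missing-information reading (the microstate count W is an arbitrary example, chosen so the yes/no-question count comes out whole):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(W):
    """S = k_B * ln(W) for W equally likely microstates."""
    return k_B * math.log(W)

def missing_information_bits(W):
    """Yes/no questions needed to pin down the microstate: log2(W)."""
    return math.log2(W)

# Toy macrostate compatible with W = 2**10 microstates:
W = 2 ** 10
print(boltzmann_entropy(W))        # ~9.57e-23 J/K
print(missing_information_bits(W)) # 10.0 bits
```

The two functions differ only by a constant factor (k_B ln 2 per bit), which is the sense in which thermodynamic entropy and missing information are the same bookkeeping in different units.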

One consequence of this is that under circumstances where energy flows from A to B, B naturally tends to dump some of the energy into modes where the number of ways that energy and mass may be arranged at micro level rises. Its entropy strongly tends to rise.

Now, under certain circumstances, heat flows (energy moving by way of radiation, conduction or convection, per temperature difference . . . itself in many respects and cases by means of a kind of diffusion) may be coupled to B, and used to perform ordered forced motion on D, that is, shaft work. But this comes at a price: sufficient waste heat is transferred to a heat sink C that the entropy sums add up appropriately. (This, too, was discussed.)

Shaft work, properly directed, is a common means of performing constructive work issuing in FSCO/I, e.g. typing this comment in ASCII coded English. Such is an instance of the universal, habitual experience of FSCO/I being observed to come from design.

And of course a tornado or hurricane, or just the wind system, are examples of naturally occurring heat engines driven by small to planetary scale convection currents. (But these things do not directly perform constructive work leading to creation of FSCO/I. For that to happen, something like a windmill needs to be built and its resulting shaft work needs to be intelligently directed, in principle it could build a jumbo jet from parts.)

Now, the problem GS addressed is clearly that it is being improperly suggested, in the name of what thermodynamics allows, that diffusion and similar dispersive forces can reasonably be expected, and feasibly observed, to spontaneously perform FSCO/I-creating constructive work. Such is comparable, in Shapiro’s terms, to expecting a cluster of wind, tornados, earthquakes and the like to spontaneously play a golf ball through an 18 hole course. In Hoyle’s terms, it is like expecting to be able to see a tornado spontaneously building a jumbo jet out of spare parts in a junkyard.

Let me put GS in his own words, from some years ago, on second thoughts on the second law:

. . . The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when DIFFUSION alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur.

The discovery that life on Earth developed through evolutionary “steps,” coupled with the observation that mutations and natural selection — like other natural forces — can cause (minor) change, is widely accepted in the scientific world as proof that natural selection — alone among all natural forces — can create order out of disorder, and even design human brains, with human consciousness. Only the layman seems to see the problem with this logic. In a recent Mathematical Intelligencer article [“A Mathematician’s View of Evolution,” The Mathematical Intelligencer 22, number 4, 5-7, 2000] I asserted that the idea that the four fundamental forces of physics alone could rearrange the fundamental particles of Nature into spaceships, nuclear power plants, and computers, connected to laser printers, CRTs, keyboards and the Internet, appears to violate the second law of thermodynamics in a spectacular way.1 . . . .

What happens in a[n isolated] system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. As I wrote in “Can ANYTHING Happen in an Open System?”, “order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door…. If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth’s atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here.” Evolution is a movie running backward, that is what makes it special.

THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn’t, that atoms would rearrange themselves into spaceships and computers and TV sets . . . [NB: Emphases added. I have also substituted in isolated system terminology as GS uses a different terminology.]

That is GS’ essential case, and he is right.

Confronted with a box of 500 fair coins all H, there was a huge argument in the teeth of blatant and easily shown evidence that such an outcome is overwhelmingly improbable by blind chance tossing on the gamut of our solar system, i.e. empirically unobservable by blind chance on that gamut, while there is also a reasonable possibility that such may occur by choice. So choice is a practically certain explanation. In red herring and strawman distraction, much was made for hundreds of posts of how it is logically possible for 500 H to occur, so we should not be surprised to see it by chance, just like any other single tossed pattern.

Rubbish, by overwhelming improbability, the dominant cluster of something near 50-50 H/T in no particular order will reliably be observed on tossing fair coins, to the point of practical certainty.
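The two numbers behind this claim are easy to check directly; a minimal sketch in Python (my own illustration, not from the thread), using exact binomial counts for 500 fair coins:

```python
from math import comb

N = 500  # number of fair coins

# Probability of the single all-heads sequence: 1 / 2^500
p_all_heads = 1 / 2**N  # ~3e-151

# Probability that the heads count lands within +/- 2 standard
# deviations of 250 (sd = sqrt(N)/2 ~ 11.2 for fair coins).
sd = (N ** 0.5) / 2
lo, hi = int(250 - 2 * sd), int(250 + 2 * sd)
p_near_even = sum(comb(N, k) for k in range(lo, hi + 1)) / 2**N

print(f"P(all heads)        = {p_all_heads:.3e}")
print(f"P(near 50/50, 2 sd) = {p_near_even:.3f}")
```

The all-heads probability is on the order of 10^-151, while the bulk of the distribution sits in the near-even band, which is the "dominant cluster" being described.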

Just so, it is practically certain that diffusion and the like will disperse and disorder rather than carry out constructive work issuing in FSCO/I, in Darwin’s warm little pond or the like.

What is driving this refusal to see the obvious is that to accept the verdict of the overwhelming statistical weight of disordered patterns driven by diffusion etc. would open a door you and others demand to keep bolted, locked, barred and padlocked at any cost: you absolutely refuse to entertain the possibility that FSCO/I in the living cell could have originated by intelligently directed choice contingency, i.e. design. So any and every argument that seems plausible to the eye of the Darwinist faithful, or those able to be influenced by such, is trotted out, dressed up in a lab coat and presented with an air of confidence.

Red herrings, led away to strawmen soaked in subtle or blatant ad hominems and ignited to poison, polarise, cloud and confuse the atmosphere. All in defence of mind-closing, stubbornly insisted-on evolutionary materialist a prioris that in the end demand, under the false cover of a lab coat, that we accept the explanation that a jumbo jet has been formed by a tornado in a junkyard, in preference to the alternative that the jet is an obvious constructed artifact best explained on intelligently directed construction.

Or, if you will, since this post could conceivably be produced by noise on the internet tracing to diffusion and similar forces, we must lock and bar the door to the explanation that it is the product of art, evidenced by the reliable signature of art, FSCO/I.

In the end, Johnson’s rebuke originally given to Lewontin, is apt:

For scientific materialists the materialism comes first; the science comes thereafter. [Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”

. . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [Emphasis added.] [The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22–25.]

KF

308.
kairosfocus says:

F/N: Let me try a diagram using textual features:

Heat transfer in Isol system:

|| A (at T_a) –> d’Q –> B (at T_b) ||

dS >/= (-d’Q/T_a) + (+d’Q/T_b), Lex 2, Th
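The entropy bookkeeping in this textual diagram can be checked with a small numerical sketch (the values are my own, purely illustrative): for heat d’Q flowing from a hotter body A to a colder body B, the two terms of the inequality sum to a positive net dS.

```python
# Net entropy change when heat dQ passes from a reservoir at T_a to
# one at T_b, per the inequality dS >= (-dQ/T_a) + (+dQ/T_b).
def entropy_change(dQ, T_a, T_b):
    """Lower bound on net dS (J/K) for heat dQ (J) flowing A -> B."""
    return -dQ / T_a + dQ / T_b

# Illustrative numbers: 1000 J from a 400 K body to a 300 K body.
dS = entropy_change(1000.0, 400.0, 300.0)
print(f"dS >= {dS:.3f} J/K")  # positive whenever T_a > T_b
```

The hot body loses less entropy than the cold body gains, so the isolated total rises; this is the "compensation" bookkeeping that both sides of the thread appeal to.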

Heat engine, leaving off the isolation of the whole:

A –> d’Q_a –> B’ =====> D (shaft work)

Where also, B’ –> d’Q_b –> C, heat disposal to a heat sink

Where of course the sum of heat inflows, d’Q_a, will equal the sum of the rise in internal energy of B, dU_b, the work done on D, dW, and the heat transferred onwards to C, d’Q_b.

The pivotal questions are: the performance of constructive work by dW; the possibility that this results in FSCO/I; the possibility that B itself shows FSCO/I enabling it to couple energy and do shaft work; and the suggested idea that d’Q’s can spontaneously lead to shaft work constructing FSCO/I as an informational free lunch.

By overwhelming statistical weights of accessible microstates consistent with the gross flows outlined, this is not credibly observable on the gamut of our solar system or the observed cosmos.

There is but one empirically, reliably observed source for FSCO/I, design. The analysis on statistical weights of microstates and random forces such as diffusion and the like, shows why.

And, this is the case that is being diverted from through red herrings and strawmen. Multiplied by attempts at Alinskyite ridicule, rather than sober assessment of the merits.

KF

309.

KF:

Are you aware that tornadoes are examples of order, not complex functionally specific organisation?

I am aware that they are examples of spontaneously generated low entropy systems.

Yet the 2nd Law holds; this is because a low entropy system can be generated spontaneously in a local system, consistent with the 2nd Law, if work is done on it by its surroundings, which, as a result, increase in entropy “in compensation”, as it were.

Therefore it is simply not true to say that life cannot emerge spontaneously because life is a low entropy system that is forbidden under the 2nd Law. So Granville’s argument is incorrect.

If his point is that “complex specified information” cannot arise spontaneously, as you and Dembski claim, then fine, but the 2nd Law says nothing about whether “complex specified information” can arise spontaneously.

Not surprisingly, as it has proved impossible to define “complex specified information” in such a way that it is computable. I know you disagree, but you have not, at any rate to my satisfaction, demonstrated how you compute the probability under the null of any relevant postulated mechanism.

310.
cantor says:

So I take it you would agree that massive drops in entropy can occur spontaneously without the 2nd Law being violated?

Of course I would. Did I post something in this thread to cause you to infer otherwise?

Would you therefore agree that there is no reason to invoke the 2nd Law as a reason why life could not have occurred spontaneously?

Not without further stipulation.

311.
kairosfocus says:

EL, wrong again. Lucky noise issuing in FSCO/I is not consistent with the overwhelming tide of spontaneity in microscopic events. Forces of diffusion and the like will reliably not spontaneously create order once we are above the level of high-probability fluctuations, much less complex functionally specific organisation; for essentially the same reasons why reliably tossing 500 coins will not, on the gamut of our solar system, create a case with 500 H. Similarly, on dropping 100 darts scattered across a bell-chart with the peak 0.4 m high and the width reasonably in scale, we will with practical certainty not see a dart-hit between 5 and 6 SD away from the mean; another example that is clear but whose force you refuse to acknowledge. You are refusing to acknowledge that which is well warranted, as it does not fit your ideological agenda. In physical terms, mere logical possibility is not enough; there must be sufficient resources and opportunities that we have a reasonable expectation to observe a stochastic outcome. But by the nature of the case, for FSCO/I that is simply not present on the gamut of our solar system or the observed cosmos. And this is the very same statistical basis for the second law. KF
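The darts figure here can be put in numbers; a minimal sketch assuming an ideal standard normal distribution (whose peak density of about 0.4 appears to be what the 0.4 m peak height alludes to):

```python
from math import erfc, sqrt

def upper_tail(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2))

# Probability a single dart lands between 5 and 6 SD from the mean
# (counting both tails), under a normal distribution.
p_band = 2 * (upper_tail(5) - upper_tail(6))
expected_hits = 100 * p_band  # expected such hits among 100 darts

print(f"P(5 < |Z| < 6)       = {p_band:.2e}")
print(f"expected hits / 100  = {expected_hits:.2e}")
```

The band probability is under one in a million, so among 100 darts the expected number of hits in that band is on the order of 10^-4: practically never observed, as claimed.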

312.

Cantor:

Not without further stipulation.

Well, stipulate away, then 🙂

I’m all ears.

313.

KF:

Until you can show me what your probabilities are the probabilities of, and under what conditions, I cannot evaluate your argument.

Many arrangements that are improbable under one set of conditions are highly probable under another set. A human being moving things around is one, but not the only one.

And a human being increases in entropy as a result of doing the moving.

314.
Joe says:

Would you therefore agree that there is no reason to invoke the 2nd Law as a reason why life could not have occurred spontaneously?

LoL! That premise has more problems than just the 2nd law.

And why is it that materialists cannot produce the probabilities for their position? Why is it up to us?

Does anyone else see the problem with that?

315.
PaV says:

Isn’t there a simple proof that the 2nd LoT is violated in the case of organic beings?

Here’s my simple proof: “Remember, man, that thou art dust, and unto dust you shall return.”

Think through the logic if you like.

316.
kairosfocus says:

EL: that’s a threadbare excuse. The case of 500 H coins WAS one in which the probabilities are indeed calculable, and the underlying point, from sampling theory, is quite evident. It is the sampling challenge that dominates the result and is decisive. You know or should know full well that a blind sample of moderate scope of a large domain of possibilities will reflect the bulk patterns of the distribution. I used the darts and charts example to illustrate that. YOU STILL FOUND EXCUSES TO DUCK, DODGE AND DIVERT. You thus convinced me that you are not being reasonable, but pushing an ideology; on top of hosting, denying, then defending slander.

The basic point is that it is well known that diffusion-type processes access a large array of possible states in no particular order, so once there is sufficient freedom for relevant particles in an initially relatively orderly pattern to interact through random walks, the difference in statistical weight of clustered vs scattered states pushes the system strongly to the scattered states. If you bothered to read the already linked discussions you would know why. That is the essential message of the second law, and it is why the same patterns are relevant to open systems. Indeed, in such systems, absent coupling of inputs to work-producing energy conversion devices, the injection of energy will tend to increase, sharply, the number of ways that mass and energy can be arranged at micro level; thus, increasing entropy. The earth is such an importer of energy, and there is a tendency for the entropy of systems to rise as a result.

When work-producing energy conversion devices produce constructive work issuing in FSCO/I [notice: organisation, not mere order], invariably there is in our observation a plan, and a system to give effect to it; no surprise, as FSCO/I-rich configs are very rare in spaces of possible configs. Often such systems themselves exhibit FSCO/I, and there is no good reason to imagine that such could spontaneously arise from diffusion and the like. As has been pointed out and explained several times, there is no need to do so again. But on track record, no evidence that does not sit well with your ideology will ever find your approval (one can always toss up a selectively hyperskeptical objection), so the point here is to note that ideology-driven attitude problem and alert the onlooker to it. KF

PS: Above you got the matter exactly the wrong way around. It is on fundamentally thermodynamic considerations that it was recognised to be past empirical plausibility for FSCO/I to arise by spontaneous action of diffusion and related forces; think about a Welcome to Wales sign arising spontaneously from an avalanche: logically possible, but so overwhelmed by the clusters of other possibilities that it is a practical impossibility, an unobservable outcome on the gamut of our solar system. The link to the more broadly familiar information concepts came later, after the 1984 TMLO.

317.
PaV says:

Elizabeth:

Until you can show me what your probabilities are the probabilities of, and under what conditions, I cannot evaluate your argument.

This is equivalent to the same argument made against Dembski’s CSI, but in a slightly different form. When it comes to Dembski, the Darwinists say: “What’s the probability distribution involved? It’s not a ‘uniform probability distribution.'”

And here, it’s: “Well, what is the mechanism at work which brings about these probabilities?”

In my view, this is but feigned ignorance.

Isn’t that what IDists are accused of, an ‘argument from ignorance’? Isn’t that what you’re holding onto here?

318.

Kairosfocus: please stop berating me for one moment and listen to what I am asking you here. You say:

EL: that’s a threadbare excuse. The case of 500H coins WAS one in which the probabilities are indeed calculable,

YES, indeed they are

and the underlying point, from sampling theory is quite evident. It is the sampling challenge that dominates the result and is decisive. You know or should know full well that a blind sample of moderate scope of a large domain of possibilities will reflect the bulk patterns of the distribution.

Exactly. I do not, and never have, disputed this. Clearly if you know the distribution of your expected values under a specific hypothesis (e.g. fair coins, fairly tossed) you can rule out that hypothesis as the explanation for your observation if what you observe has extremely low predicted frequency under that null.

What I keep asking you, and you seem not to read my posts far enough to take this in, because you keep addressing a different point, is:

How do you compute the expected frequency distribution under an unknown null?

Or, if you prefer this equivalent question:

How can you tell the probability of a pattern from the pattern alone?
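The question can be made concrete with a toy example (the coin parameters are my own illustration): the same observed pattern has very different probabilities under different generative hypotheses, so no probability can be read off the pattern alone.

```python
# The same pattern has different probabilities under different nulls.
pattern = "HHHHHHHHHH"  # ten heads

# Null 1: fair coin, independent tosses.
p_fair = 0.5 ** len(pattern)    # 1/1024 ~ 0.000977

# Null 2 (hypothetical): a heavily biased coin with P(H) = 0.9.
p_biased = 0.9 ** len(pattern)  # ~0.349

print(f"P(pattern | fair coin)   = {p_fair:.6f}")
print(f"P(pattern | biased coin) = {p_biased:.6f}")
```

Rejecting the fair-coin null says nothing about the biased-coin null; the inference always runs relative to a specified generative process.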

319.
cantor says:

EBL @ 308 wrote:

Well, stipulate away, then 🙂

I’m all ears.

I have a hermetically sealed vial containing all, and only, the chemical elements (in sufficient quantity) to form a living amoeba.

I put that vial out in the sun at 11am and sit watching while I sip my iced tea.

Yes or No (I don’t need a long explanation, just yes or no): Is it valid to invoke the 2nd Law as a reason why a living amoeba will not form spontaneously in that sealed vial sometime before noon?

320.

PaV

This is equivalent to the same argument made against Dembski’s CSI, but in a slightly different form. When it comes to Dembski, the Darwinists say: “What’s the probability distribution involved? It’s not a ‘uniform probability distribution.’”

And here, it’s: “Well, what is the mechanism at work which brings about these probabilities?”

You are absolutely right: it is the same counterargument, because Granville’s argument is really a restatement of Dembski’s (and has nothing to do with the 2nd Law!)

In my view, this is but feigned ignorance.

Isn’t that what IDists are accused of, an ‘argument from ignorance’? Isn’t that what you’re holding onto here?

Nobody on the “evolution” side is saying: “we don’t know what the likelihood is, therefore it is likely”. All we are saying is that we don’t know what the likelihood is, but we know that many of the mechanisms proposed have been tested, therefore we cannot infer design.

It is Dembski et al who are saying “we do know what the likelihood is, and it is too small to be plausible, therefore we can infer design.”

Evolutionary theory does not rule out Design.
ID rules it in.

The positions are not symmetrical.

321.

cantor:

I have a hermetically sealed vial containing all, and only, the chemical elements (in sufficient quantity) to form a living amoeba.

I put that vial out in the sun at 11am and sit watching while I sip my iced tea.

Yes or No (I don’t need a long explanation, just yes or no): Is it valid to invoke the 2nd Law as a reason why a living amoeba will not form spontaneously in that sealed vial sometime before noon?

No.

322.
cantor says:

KS@300: You’re asking people to jump through hoops for you

Participation is voluntary. If you’re not interested or (more likely) don’t have a clue, then get lost.

323.
cantor says:

EBL @ 321 wrote:
No

Very well then. Whatever 2nd law it is that makes your answer correct, “I would therefore agree that there is no reason to invoke that 2nd Law as a reason why life could not have occurred spontaneously”.

324.
keiths says:

cantor,

Participation is voluntary. If you’re not interested or (more likely) don’t have a clue, then get lost.

Says Mr. “I care about respect and manners”. 🙂

325.

Very well then. Whatever 2nd law it is that makes your answer correct, “I would therefore agree that there is no reason to invoke that 2nd Law as a reason why life could not have occurred spontaneously”.

Well, I don’t quite understand your reasoning, but I’m glad we agree. I was referring to the 2nd Law of thermodynamics.

I assumed you were too.

326.
Alan Fox says:

I’ll indulge cantor, though I don’t have any math training or expertise beyond school.

If I were to randomly select a group of 75 different people from a roomful of 200 men and 100 women, what is the probability that the selected group would contain exactly 25 women?

I am assuming you do it sequentially, so in the first selection event you have a 0.67 probability of finding a man and a 0.33 probability of finding a woman.

Each subsequent selection will be affected depending on whether a man or woman is selected in the previous selection event. If the first selection was a man, the second selection probabilities are 199/299 for a man and 100/299 for a woman, and so on.

Now, what’s the relevance?

327.
cantor says:

@324: Says Mr. “I care about respect and manners”.

You can dish it out, but you can’t take it.

328.
Thomas2 says:

EL (at 271):

Thanks for the clarification – I had indeed read your post pretty much as keiths suggests (at 273).

(My prior experience of the expression “argument from incredulity” has been only in hostile reference to ID arguments generally; in such cases the charge is usually an evasion of a proper counter-argument on the part of the objector, and in turn this objection opens the objector up to the counter charges I suggested – that the claims of Darwinian evolution demand a credulous and uncritical acquiescence, [somewhat Emperor’s new clothes-like], rather than healthy critical scrutiny like any other scientific proposal. I am glad that this was not your meaning.)

Perhaps the pink unicorn example, and Granville Sewell’s last argument, might be understood to be something like an “argument from inconsistency” in your analysis?

329.
Thomas2 says:

ks (at 273):

Yes, that’s indeed how I’d read it.

330.
keiths says:

cantor,

You can dish it out, but you can’t take it.

Oh, I’m not complaining. Just pointing out the hypocrisy.

Carry on.

331.
Thomas2 says:

scordova (at 277):

On the face of it, mindless (Darwinian) evolution would very obviously appear to violate statistical, chemical, information-based or diffusion versions of the 2nd Law: however, actually demonstrating this in a way which connects with proposed evolutionary processes seems rather tricky to me, so I’m currently uncommitted on the issue (though doubtful, especially in connection with pre-biotic scenarios).

To my mind the issue needs proper process-measurable demonstration either way. Having appreciatively followed Granville Sewell’s articles and posts on this issue, I am becoming a little concerned that there is a disconnect between generalised theoretical treatments and actual proposed Darwinian processes, thus to date I remain unconvinced: tornado illustrations may well correlate to a degree with pre-biotic scenarios, but self-replicating systems subject to differential selection/survival appear to me to be a different kind of case, requiring a more focussed treatment, and/or appropriate empirical investigation.

This present topic thread is primarily concerned with one particular issue in the context of the 2nd Law (even if it is the key issue) – compensation arguments – which is what grabbed my attention; but I shall certainly now look up your previous discussions on the 2nd Law at some stage.

332.
cantor says:

There’s no hypocrisy. Just communicating with you on the level you seem to prefer.

333.
cantor says:

AF@326: I am assuming you do it sequentially, so in the first selection event you have a 0.67 probability of finding a man and a 0.33 probability of finding a woman.

Each subsequent selection will be affected depending on whether a man or woman is selected in the previous selection event. If the first selection was a man, the second selection probabilities are 199/299 for a man and 100/299 for a woman, and so on.

Have you thought about what it would take to actually get an answer doing it this way? Try it.

Now, what’s the relevance?

The fact that people even ask that question makes it relevant.
Generative process X: randomly select a group of 75 different people from a roomful of 200 men and 100 women.

Macrostate1: the group contains exactly 25 women.

What is the probability that X will generate Macrostate1?

334.
PaV says:

Elizabeth:

Thank you for a straight-forward and honest answer.

However,….

All we are saying is that we don’t know what the likelihood is, but we know that many of the mechanisms proposed have been tested, therefore we cannot infer design.

What mechanisms have been tested? And how do they rule out the design inference? I’m completely lost here.

It is Dembski et al who are saying “we do know what the likelihood is, and it is too small to be plausible, therefore we can infer design.”

And this gets us back to ‘probability distributions,’ doesn’t it?

I’ve read that when mathematicians don’t know what the probability distribution is for a particular set of events, they assume the uniform probability distribution.

So how is Dembski’s use of the UPD wrong, then?

And doesn’t your argument simply become: “Well, we don’t know what the probability distribution is, and we’ll probably never know what it is, so ID is completely useless”?

This is what I mean about your use of an “argument from ignorance.” Heavens, we know that nucleotide bases don’t show any chemical preference in bonding; i.e., per Meyer’s “Signature in the Cell,” there is very little difference between the frequencies of A, T, C and G. A uniform PD is certainly not that far off the mark. And the fact that probability distributions are developed to come up with better evaluations of various kinds of probabilistic events illustrates that there is no mathematical ‘precision’ when dealing with such events. Nevertheless, worthwhile answers are arrived at.

Heavens, just think about confidence intervals when it comes to standard biological field experiments. We’re always dealing with some margin of error.

The amount of error in using a uniform probability distribution to analyze DNA code is very minor. Holding onto such picayune uncertainties is, IMHO, “feigned ignorance.”

You know: “You can lead a horse to water, but you can’t make it drink.”

Evolutionary theory does not rule out Design.
ID rules it in.

No, evolutionary theory calls ID “creationism” and says it’s no more than religion.

[You may be this open-minded, but there’s a whole host of others who would willingly put a knife into an IDist if the law allowed.]

Indeed, ID not only “rules in” ‘design’, but is based on the ‘design inference.’ And, as Meyer points out in Signature in the Cell, it has the most explanatory power among any suggestions regarding OOL. And now in Darwin’s Doubt, Meyer points out that it is the best explanation for the Cambrian Explosion.

And your reason for rejecting this ‘inference’ is simply because we don’t know what probability distribution is actually in play? Again, “feigned ignorance.”

Let’s face it, when we’re at the extremal end of the tail of ANY probability distribution, they all LOOK THE SAME! I.e., it looks like a flat line. So who cares what it looks like when it starts to spike up! It’s really unimportant. The total probability density has to add up to 1.0—who cares about how, exactly, it gets to 1.0??

335.

Ah, cantor, you raise a point I think is crucial (and it’s a point I keep banging on about)!:

What is the probability that X will generate Macrostate1?

What the answer will tell you, of course, is the probability of “macrostate 1” given random selection.

It’s SO important not to divorce a probability estimate from the generative process that is assumed to have generated it.

If we observe an arrangement that is improbable under Generative process X, then we can confidently reject Generative process X.

But that does NOT mean that macrostate1 is also improbable under generative processes Y and Z, as I’m sure you will agree.

And if Y and Z do not violate the 2nd Law of Thermodynamics, and macrostate1 is probable given generative processes Y and Z, then generative processes Y and Z will be contenders as the cause of macrostate1.

Do we agree thus far?

336.
keiths says:

PaV,

So how is Dembski’s use of the UPD [uniform probability distribution] wrong, then?

We know that the UPD doesn’t apply to evolution.

Simple example: Under Darwinian evolution, are the genes for black polar bears equiprobable with the genes for white polar bears?

Obviously not. Selection is non-random.

337.

PaV:

What mechanisms have been tested?

Natural selection, some OoL hypotheses, mechanisms of heritable variance generation.

And how do they rule out the design inference?

They don’t! That’s my point!

That’s why I said

therefore we cannot infer design.

I did not say: “therefore we can rule out design”. What I’m saying is that ID arguments that say: “Evolution is too improbable, therefore design” are invalid. Equally so is the argument that says “evolution is perfectly possible, therefore no design”.

PaV

I’ve read that when mathematicians don’t know what the probability distribution is for a particular set of events, they assume the uniform probability distribution.

So how is Dembski’s use of the UPD wrong, then?

The UPD itself is fine; it’s computing the probability distribution under the relevant null that is problematic. And mathematicians may well “assume a uniform probability distribution”, but empirical scientists certainly don’t and shouldn’t! I get cross with my students when they assume normality without testing their data to see if it justifies that assumption!

And, specifically, rejecting a null without having computed the probability under that null is a prime way for papers to get rejected at peer-review! Which is why so often these days people use bootstrap algorithms, so as to generate a realistic probability distribution for their null, rather than assuming a normal distribution.
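The resampling idea can be illustrated with a toy permutation test (the data and the 10,000-draw count are my own, purely illustrative): rather than assuming a normal null, the null distribution of the statistic is built empirically by reshuffling group labels.

```python
import random

random.seed(1)  # reproducible illustration

# Toy data: two samples whose difference in means we want to test.
a = [2.1, 2.5, 1.9, 2.8, 2.4, 2.6]
b = [1.7, 1.4, 2.0, 1.6, 1.8, 1.5]
observed = sum(a) / len(a) - sum(b) / len(b)

# Permutation null: pool the data, reshuffle the labels many times,
# and record the statistic each time to build an empirical distribution.
pooled = a + b
null = []
for _ in range(10_000):
    random.shuffle(pooled)
    diff = sum(pooled[:6]) / 6 - sum(pooled[6:]) / 6
    null.append(diff)

# Empirical p-value: how often the null produces a value as extreme.
p = sum(abs(d) >= abs(observed) for d in null) / len(null)
print(f"observed diff = {observed:.3f}, empirical p = {p:.4f}")
```

No normality assumption enters anywhere: the null distribution is generated from the data itself, which is the point being made about realistic nulls.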

And doesn’t your argument simply become: “Well, we don’t know what the probability distribution is, and we’ll probably never know what it is, so ID is completely useless”?

I do think it’s fairly useless in its current form, and will remain so until the ID community abandons this insistence that they can detect ID from a pattern only, in the absence of any specific hypothesis about how that pattern might have been generated.

Dembski’s null hypothesis testing, on which his CSI is based, is indeed, in my view, useless.

Other approaches are perfectly possible, in my view.

The amount of error in using a uniform probability distribution to analyze DNA code is very minor. Holding onto such picayune uncertainties is, IMHO, “feigned ignorance.”

I don’t have any problem in assuming a uniform distribution for codons, for instance, and in any case, the true distribution can be computed from the data.

It’s assuming “random independent draw” as the null that is the problem.

No, evolutionary theory calls ID “creationism” and says it’s no more than religion.

Some “evolutionists” may, but that is not a conclusion that can be drawn from evolutionary theory. On the other hand it might well be drawn from the posts on this blog, where atheism is often equated with “Darwinism”!

Indeed, ID not only “rules in” ‘design’, but is based on the ‘design inference.’ And, as Meyer points out in Signature in the Cell, it has the most explanatory power among any suggestions regarding OOL. And now in Darwin’s Doubt, Meyer points out that it is the best explanation for the Cambrian Explosion.

Well, I disagree, but I will grant you that the “Irreducible Complexity” argument has a smidgeon more going for it than CSI. At least it seeks to reject a null hypothesis that is actually about evolution. I haven’t got to the end of Darwin’s Doubt yet, but I am certainly not impressed so far!

Anyway, good to talk to you again! Thanks for your response, even if we disagree!

338.
cantor says:

EBL @ 335 wrote:
Ah, cantor, you raise a point I think is crucial (and it’s a point I keep banging on about)!:

What is the probability that X will generate Macrostate1?

What the answer will tell you, of course, is the probability of “macrostate 1” given random selection.

It’s SO important not to divorce a probability estimate from the generative process that is assumed to have generated it.

If we observe an arrangement that is improbable under Generative process X, then we can confidently reject Generative process X.

But that does NOT mean that macrostate1 is also improbable under generative processes Y and Z, as I’m sure you will agree.

We agree up to this point. But stand down please, you are getting way ahead of me. And since you seem to have some influence over KS, would you please ask him to stop pestering me about whether or not my post was “relevant”.

339.
Alan Fox says:

Have you thought about what it would take to actually get an answer doing it this way? Try it.

Let’s make it clearer. The problem can be illustrated as two different coloured balls in a bag and using a scoop to remove a representative sample. Given a homogeneous mixture and a representative sample, the likelihood is that the sample will have the same ratio of the two colours as the contents of the bag. Similarly for a sample of two isotopes of a gas in a container: sample it and you will find the same ratio. So given a homogeneous mix and a representative sample, the probability approaches 1.

340.
keiths says:

cantor,

And since you seem to have some influence over KS, would you please ask him to stop pestering me about whether or not my post was “relevant”.

Get your facts straight. The last time I “pestered” you about the relevance of your example was 17 hours ago.

341.
keiths says:

Alan,

It’s true that you’re more likely to get a sample with the same ratio as the population than you are to get one that is off by one, or one that is off by two, etc.

However, that doesn’t mean that the probability of getting a sample with the same ratio as the population is close to 1. Far from it.

342.
cantor says:

KS@341:
It’s true that you’re more likely to get a sample with the same ratio as the population than you are to get one that is off by one, or one that is off by two, etc.

However, that doesn’t mean that the probability of getting a sample with the same ratio as the population is close to 1. Far from it.

KS is correct. (Did I say that??)

Now, what is the probability that X will generate Macrostate1?

343.
cantor says:

Now, what is the probability that X will generate Macrostate1?

No cheating.

344.
keiths says:

cantor,

Now, what is the probability that X will generate Macrostate1?

345.
Joe says:

Under Darwinian evolution, are the genes for black polar bears equiprobable with the genes for white polar bears?

Darwinian evolution can’t explain bears. That is the whole point.

Also polar bears’ skin is black. The fur is transparent and just appears white. Do geneticists even know what genes code for that feature?

346.
keiths says:

Broken link in my previous comment.

347.
kairosfocus says:

Dr Liddle: It has been quite clear for some time that you have been pushing ideological talking points backed by willful obtuseness and backed up by enabling slander. Sadly, all of that has been repeatedly shown; it is not mere empty berating. Perhaps, you may now be willing to make a positive change and also correct what you have been enabling, which would be welcome. If so, show it by deeds, not the projection of a smiley-faced rhetorical stance that covers enabling of slander hosted on your blog. In the meanwhile, with all due respect, I can only draw the conclusion that you are an ideologue here to push an agenda; not a genuinely fair participant in a give and take discussion on the merits. Your sustained actions in the teeth of many opportunities to do better have taken that option off the table. KF

348.
cantor says:

re: KS @ 344

Go ahead. I’ll be away for a couple hours.

349.
keiths says:

kairosfocus,

350.
keiths says:

OK, Lizzie, close your eyes so we don’t spoil your dentist visit.

Now, what is the probability that X will generate Macrostate1?

…with ‘X’ and ‘Macrostate1’ defined as follows:

Generative process X: randomly select a group of 75 different people from a roomful of 200 men and 100 women.

Macrostate1: the group contains exactly 25 women.

The probability of getting Macrostate1 is equal to the number of ways of getting Macrostate1 divided by the number of ways of selecting 75 different people.

The number of ways of selecting 75 different people from a group of 300 is simply Comb(300,75) where Comb(x,y) is given by the ubiquitous formula x!/(y!(x – y)!).

Meanwhile, the number of ways of getting Macrostate1 (exactly 25 women) is equal to the number of distinct ways of picking 25 women from among the 100 women in the population, times the number of distinct ways of picking 50 men from among the 200 men in the population. More compactly, Comb(100,25) x Comb(200,50).

So the probability of Macrostate1 is equal to Comb(100,25) x Comb(200, 50) / Comb(300,75).

Plug in the numbers (thank God for calculators, so to speak) and if I’ve done the calculation right, you get

2.42519269e+23 x 4.53858377e+47 / 9.79582752e+71

or a probability of a little over 11%.
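
For anyone who wants to check the arithmetic, the same hypergeometric probability can be computed directly; a minimal sketch in Python, using the numbers given above:

```python
from math import comb

# Probability that a random sample of 75 people, drawn without replacement
# from a room of 200 men and 100 women, contains exactly 25 women
# (a hypergeometric probability).
p = comb(100, 25) * comb(200, 50) / comb(300, 75)
print(p)  # ≈ 0.112, a little over 11%, matching the hand calculation
```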

OK, Lizzie, you can open your eyes now. 🙂

351.
keiths says:

Okay, cantor, I jumped through your flaming hoops.

Now will you tell us how any of this is relevant to evolution and the second law?

352.
PaV says:

Elizabeth:

It’s assuming “random independent draw” as the null that is the problem.

What is the “null” that neo-Darwinism assumes? I would think it would be exactly the same.

I do think it’s fairly useless in its current form, and will remain so until the ID community abandons this insistence that they can detect ID from a pattern only, in the absence of any specific hypothesis about how that pattern might have been generated.

But we have a specific hypothesis: a designer is responsible for the ‘pattern.’

Likewise, neo-Darwinism tells us that ‘pattern’ came about via chance happenings. If, then, you want to raise the notion that some ‘non-random’ agency is at work, ‘helping’ to generate the ‘pattern,’ then the shoe is on the other foot: what evidence do you have that such an agency exists? Or are you just assuming some unsee-able, unknowable agency, which, then, becomes a sort of “Darwin-of-the-gaps” strategy?

The most important answer here is to the first question: what does neo-Darwinism accept as the null hypothesis?

353.
PaV says:

keiths:

We know that the UPD doesn’t apply to evolution.

Simple example: Under Darwinian evolution, are the genes for black polar bears equiprobable with the genes for white polar bears?

Obviously not. Selection is non-random.

And what about Kimura’s “neutral theory”, or what about “evo-devo”, where NS is only minimally important, whereas neutral drift is considered the more important phenomenon; then you’re dealing with a UPD, are you not?

So how did life get started if selection is impossible prior to the replication made possible by it?

354.
PaV says:

Since no one has commented re my earlier post, it would appear that I will have to be more direct in what I’m saying.

“Remember, man, that thou art dust, and unto dust you shall return.”

OK. What does this mean in the context of the 2LoT?

This. For human life to continue, work must be done so as not to allow the 2nd LoT to increase our ‘entropy’; that is, to “rust.” We call it basal metabolism, which keeps our body temperature around 98.6 F. When we die, our body temperature assumes the ambient temperature. And, if left alone, our bodies begin to rot. And, if left alone long enough—let’s say, buried underground—then only the skeleton will remain, the other soft tissue becoming ‘dust.’

So, obviously, ‘life’ and the 2nd LoT are opposed to one another. And for ‘life’ to have formed, some kind of ‘work’ had to be done so as to overcome the effects of the 2nd Law. Who did that work? What did that work? Without that work, our basal temperature would be ambient.

This should be proof enough that the evolution of life—here we’re talking OOL—and the 2nd Law, are opposites.

You can nitpick all you want, but facts are facts. And when you die, the 2nd Law takes over. The rest is easily inferred.

355.

KF @ 347:

I told you already that I have said all I have to say on that matter. We disagree, as often happens.

The best I can do in line with my own ethical judgment is to repeat my open invitation to post your own views on my blog.

I remain in hope that we can eventually put this disagreement behind us, even if we cannot agree.

356.
keiths says:

PaV,

Evolution is not purely random. It includes randomness, but it also has non-random components.

I explained this to kairosfocus a few days ago:

KF,

I used the known relationship from Info to probabilities to infer the relevant probabilities based on non-intelligent stochastic processes…
I took time to go back to the root of the situation — something studiously dodged above, and ground the fact that under abiotic circumstances we normally see racemic forms of organic molecules formed.

Whether we are talking about evolution or OOL makes no difference. Pure chance and design do not exhaust the possibilities.

Evolution is obviously more than pure chance since selection is nonrandom. But OOL is also nonrandom, because chemistry is not the random assembly of atoms into molecules. CH4 is a possible molecule; CH6 isn’t. Chemistry involves nonrandom rules and very strong nonrandom electrical forces.

You can’t model it with a flat distribution.

Blind statistics based on biases will lead to gibberish with high reliability…

True, and that’s a pretty accurate assessment of your argument.

Now as for what the chance based hyps are, obviously they are blind search mechanisms, if design is excluded…

Untrue. Neither evolution nor OOL is a blind search.

Blind search is when you pick search points completely randomly out of the entire search space, then turn around and do the same thing again.

In evolution, by contrast, you start from wherever you are in the search space and search only those areas that are within the reach of mutation — a tiny subset of the entire search space. If any of those small areas contains a viable configuration, then you repeat the process, starting from that configuration and searching only the tiny subset of the search space that is reachable from it by mutation. It’s highly nonrandom and nothing like a true blind search, though there is a random component to it.

OOL is the same. You don’t pick a spot in the search space by taking a large number of atoms at random and blindly throwing them together, then repeating the process. You start from whatever molecules you already have, and you see which tiny portions of the search space you can reach from there. Then you repeat the process. There’s randomness involved, but you are not searching the entire space — only a tiny subset.

All of your emphasis on the gargantuan size of the search space is therefore misplaced. It’s not the overall size of the space that matters, but the size of the space being searched at each step.

Now as to specifics, it is well known that evolutionary mechanisms warranted from empirical grounds relate to chance variations at mutation level and at expression and organisation level…

Mutations are random with respect to fitness. Selection isn’t.

And as for WR400’s demand that I identify the contents of H, that is funny, it is an implicit admission of absence of empirically warranted mechanisms.

He’s asking you to enumerate the contents of H because it is apparent that you have neglected to include anything but pure chance. The fact that you won’t answer his question is an implicit admission that you cannot justify your CSI and P(T|H) values.

In any case the info to antilog transformation step says in effect that per the statistics [especially redundancy in proteins and the like that reflect their history and however much of chance processes have happened, and whatever survival filtering happened that is traced in the statistics] the information content implies that BLIND processes capable of such statistics will face a probabilistic hurdle of the magnitude described.

Evolution and OOL are blind, but not blind in the way you are using the term above. See my remarks above on why evolution and OOL are not blind searches.
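
The contrast keiths draws between blind search and mutation-limited local search can be illustrated with a toy experiment. This is only a sketch with a hypothetical 40-bit target, not a model of biology: it shows how restricting each step to a small mutation-reachable neighbourhood changes search behaviour.

```python
import random

random.seed(0)

N = 40
TARGET = [1] * N  # a toy "viable configuration" (hypothetical target string)

def fitness(s):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def blind_search(tries):
    """Sample points uniformly at random from the whole 2^N space."""
    best = 0
    for _ in range(tries):
        s = [random.randint(0, 1) for _ in range(N)]
        best = max(best, fitness(s))
    return best

def local_search(tries):
    """Start anywhere; examine only one-bit mutations of the current point,
    keeping a mutant that is at least as fit (a single flip changes fitness
    by exactly 1, so this accepts only improvements)."""
    s = [random.randint(0, 1) for _ in range(N)]
    for _ in range(tries):
        t = s[:]
        t[random.randrange(N)] ^= 1  # the mutation-reachable neighbourhood
        if fitness(t) >= fitness(s):
            s = t
    return fitness(s)

print("blind:", blind_search(2000))  # almost never comes close to N
print("local:", local_search(2000))  # almost always reaches N
```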

357.

PaV

What is the “null” that neo-Darwinism assumes? I would think it would be exactly the same.

It would depend on the specific hypothesis.

But for natural selection in a population, for instance, your null might be “random walk” aka “drift”.

But you’d still have to compute your expected distribution under the null very carefully.

358.
CS3 says:

CS3,

Okay, but I urge you to keep thinking about it.

Particularly this part:

Do you see that the second law is as irrelevant to your doubts about evolution as the first law would be to your doubts about gerbil-poofing?

While an analogy such as this (which, among other things, makes an analogy between the First and the Second Laws) will be rather strained, I suggest we add something like the following to make it more accurate:

Later, someone (let’s call him “Sewell”) recognizes that, in the process of making his careful measurements, your friend (let’s call him “Styer”) has used two scales with completely different calibrations in determining the mass of the gerbils and of the furniture, thus making his proof that the First Law has not been violated completely invalid. Furthermore, the mass of all those gerbils very strongly appears to be much more than the mass that has disappeared from the furniture. Nevertheless, (for the sake of this analogy), it is not possible to measure the gerbils and the furniture with equivalently calibrated scales, and thus it cannot be definitively proven either way whether the First Law has been violated, though it intuitively appears that it has. Militant neo-gerbilpoofists attempt to suppress “Sewell’s” paper showing the miscalibration of “Styer’s” scales, because they agree with “Styer’s” conclusion, even though they know deep down that the calibration of “Styer’s” scales is indefensible. 🙂

359.
Thomas2 says:

keiths –

Thank you for your reply (of July 7, 5:56, #285 as I write).

One way of compactly formulating ID in words as a putative scientific theory (“theory” being used in its common scientific sense rather than in the more formal sense usually cited in ID /Creationism -v- Darwinism debates) might be something like (based mainly on Dembski) –

“where in nature you encounter an entity which features appropriately statistically significant tractably (demonstrably relevant) conditionally independently specified complexity (quantifiably beyond the plausible reach of chance and necessity alone), then you can reliably make an unequivocal design inference (propose intelligent design – the planned action of a mind – as hypothesis to account for those features)”.

In other words, suitably defined/delimited “organised”, “functional” or “specified” “complexity” is a reliable indicator of design in certain circumstances.

Darwinian evolution proposes to account for increasing organised/functional/specified complexity in biological organisms without invoking mindful design.

In terms of the 2nd Law (as I very basically understand it), these Darwinian processes must therefore generate an increase in such organised/functional/specified complexity by being compensated from greater reductions in organised/functional/specified complexity elsewhere within the same system.

Since the essence of Darwinian evolution is that Darwinian processes basically operate by selective mechanisms acting on differentially fit self-replicating systems to promote those which will generate greater numbers of self-replicating offspring as a result, rather than by selective mechanisms acting directly and proportionately upon systems characterised by their organised complexity, if Darwinian processes are to succeed in increasing biological complexity there must, on average, be a correlation between increases in selectable/selected fitness and organised complexity.

Hence my tentative suggestions that for Darwinian processes alone to successfully do the work attributed to them in a way consistent with the 2nd Law, (i) natural selective processes must fulfil the role of a heat pump (as an integral and essential part of the thermodynamic system under consideration), and (ii) there must be a significant positive correlation between appropriately quantified increases in [selectable fitness] and appropriately quantified increases in complexity, and hence my question (assuming that this approach is on the right track) as to whether there was any empirical evidence to support such a relationship.

To put it another way, I do not know whether Darwinian processes alone can accomplish what is claimed for them. If they can, then clearly they do not violate the 2nd Law. On the face of it, the Darwinian claim to be capable alone of generating local increases in functionally complex order seems implausible (to say the least). If, however, a positive correlation of the kind I have suggested above were to be demonstrated, then not only would conformity to the 2nd Law be demonstrated, but this kind of objection would be pretty well nailed, and this particular apparent absurdity resolved.

The biggest problem with my suggestion would be (I suspect) in appropriate quantification.

Regarding the suggestion that there is an inconsistency between Creationist and IDer “thermodynamic” objections to Darwinian processes and their views on the “thermodynamics” of ordinary biological life processes, the difference between the proposed Darwinian mechanism and ordinary biological life processes (in statistical or complexity “thermodynamic” terms) is that biological reproductive systems already exist and fulfill the role of “heat pumps” whereas Darwinian systems have yet to be shown to be able to generally act in that way, so I too see little problem here in those terms. (I personally have a greater problem in finding a suitable way of quantifying these issues).

[On another note, I find your illustrations very helpful: I rather liked the furniture-into-gerbil example, and I think there is some interesting discussion that might be had around that, but that’s probably one for another day].

360.
keiths says:

Are you there, cantor?

361.
keiths says:

Thomas2,

In terms of the 2nd Law (as I very basically understand it), these Darwinian processes must therefore generate an increase in such organised/functional/specified complexity by being compensated from greater reductions in organised/functional/specified complexity elsewhere within the same system.

No, because complexity is not the inverse of entropy. For example, the entropy of an assembled computer is not necessarily less than the entropy of a corresponding pile of computer parts.

This is discussed several times in the recent second law threads, so I won’t rehash it here.

[On another note, I find your illustrations very helpful: I rather liked the furniture-into-gerbil example, and I think there is some interesting discussion that might be had around that, but that’s probably one for another day].

Yes, sometimes extreme illustrations are the best, because they make the issues stand out clearly.

362.
cantor says:

KS@350
…probability of a little over 11%.

Nicely done.

Unanticipated obligations to friends, family, and others in need have taken precedence for the time being over my playtime here.

363.
kairosfocus says:

KS obfuscates and habitually misrepresents rather than explains. He and ilk were trying to use p(T|H) in the Dembski 2005 expression as an objection, going so far as to try to turn it into a clever quip about elephants in rooms. I took the expression, pushed it one simplification step forward, to show that -log (P) is an info metric — the – log operation is already present just not worked out. Thanks to Durston et al we credibly know info content of 15 protein families, with distributions that reflect whatever has actually happened. So, substitute, then reconvert to probability form to get reasonable estimates. Ans, well below what it is credible our solar system could reasonably find on blind search of the relevant config space. Also, going back to OOL, the same basic message comes up, just from homochirality, an independent line. But ideologues will keep setting off rhetorical IEDs in hopes of creating an impression of having a point, and being able to keep on pushing strawman arguments. BTW, it is now coming on 10 months that darwinists have not been able to answer to credibly accounting for OOL and major body plans on evo mat premises backed by adequate observational evidence not imposed materialist a prioris. No root, no shoot and no empirically well warranted macro evo tree of life. KF

364.
keiths says:

cantor,

I understand, and those things take priority over UD, of course.

I’m just curious about your point in posing the challenge. I hope you’ll let us know when you have more time.

365.
keiths says:

KF,

I took the expression, pushed it one simplification step forward, to show that -log (P) is an info metric — the – log operation is already present just not worked out.

Finally (!) you acknowledge that the log operation doesn’t add any information. The result is just a probability in a different form, expressed in bits.

So you take Durston’s numbers — which assume random draw, not evolution, as Lizzie has pointed out a dozen or so times — and you convert them into probabilities which also assume random draw, not evolution.

Random draw, not evolution.

So all this blather about “the gamut of the solar system” proves nothing about evolution. It merely shows that protein families weren’t formed by purely random processes.

Which nobody claims anyway.

You, who are always going on about strawmen, have created a whopper of a strawman, which you have set alight in a vain attempt to cloud, poison and polarise the atmosphere, and to thwart the homosexual agenda, which poses a serious threat to our civilization.

The onlookers can see that you are bluffing. Again.

366.
Thomas2 says:

keiths –

I have followed the earlier discussions of whether the entropy of an assembled computer is less than that of an unassembled computer, and I’m not yet convinced as to who is really right on this.

What proponents of the “Darwinian evolution violates the 2nd Law” view are usually referring to (as we can see in the foregoing threads) is that the 2nd Law is seemingly violated in statistical/probabilistic terms, not strict thermodynamic or energy terms. A system exhibiting a less probable macro-state would have lower entropy (understood probabilistically) than one with a more probable macro-state. An unassembled computer can take up many more configurations than an integrally functioning assembled one, and so exhibits a more probable macro-state than an assembled one, as measured in terms of functional organisation (as others have noted).

In their writings both Richard Dawkins (obviously no friend of ID) and William Dembski consider that “organised complexity” or “specified complexity” can be measured as improbability – that is, that there is an inverse relationship between organised/specified complexity and probability. Thus, in statistical terms (assuming that such a relationship is valid), systems having greater organised/specified complexity than others can be considered to have lower entropy than those others.

That is why I suggested that Darwinian processes would have to work in a role equivalent to a heat pump (with, on average, a positive linkage/correlation between increases in selectable fitness and increases in functional complexity) if Darwinian evolution alone is able to produce the organised complexity claimed for it.

It seems to me that this is a question that could, in principle, be resolved empirically, by establishing the existence of such a linkage/correlation.

From my perspective, however, the real problem with such a proposal is in arriving at a consistent measure of organised/specified complexity. (I am aware of longstanding ID proposals to quantify functional complexity, including some examples in the threads above, but I am not sure how successful/consistent they are across the whole spectrum of relevant examples of complexity in nature – the problem in part being the quantification of the “organised” bit of the complexity. I am conscious, however, that my problem here may in part just be that of my current ignorance).

Looking at this particular question (quantification), the computer example to which you and Dr Liddle have both referred appears to me to illustrate this very problem.

367.
kairosfocus says:

Onlookers, KS is now being outright misleading in this strawman, trying to manufacture a concession. As I pointed out, extracting the – log P takes us to the info metric. Durston et al have an independent, empirical value of relevant info from their work on protein families, moving from 4.32 bits per AA in null state to ground and functional state reflecting variants in the island of function from across the observed world of life (and whatever stochastic processes may have contributed.) So, substituting a known I value, we may go back and get the P-value. No surprise, it is consistently below the threshold where any blind search across the config spaces on the gamut of the solar system, could reasonably be expected to find one much less the many distinct islands required. At this point, since we know him to be a highly educated ideologue, he is carrying on a disinformation and willful misrepresentation campaign to sow confusion and polarisation. Such exploits the tendency we have to think there must be substance there, but sometimes, there is only the flame, smoke and poisons of burning, ad hominem soaked strawmen. It is time to set KS’ antics and stunts to one side save as illustrations of what agenda-driven materialist ideologues and fellow travellers are doing. Which is why I took time to note here. KF

368.
keiths says:

KF,

Durston et al have an independent, empirical value of relevant info from their work on protein families…

Durston’s values assume random draw. Evolution doesn’t work that way.

I repeat:

Random draw, not evolution.

So all this blather about “the gamut of the solar system” proves nothing about evolution. It merely shows that protein families weren’t formed by purely random processes.

Which nobody claims anyway.

You, who are always going on about strawmen, have created a whopper of a strawman, which you have set alight in a vain attempt to cloud, poison and polarise the atmosphere, and to thwart the homosexual agenda, which poses a serious threat to our civilization.

The onlookers can see that you are bluffing. Again.

369.
kairosfocus says:

KS: I simply note that you are playing strawman distortions again, as, as a matter of fact, Durston’s values are empirical, based on the statistics of observed protein families, not the flat random null state, cf. here. That basis in empirics will reflect whatever actual, relevant patterns have happened in the history of life. As in, because there is some redundancy OBSERVED, he adjusted and reduced info capacity below 4.32 bits per AA, reflecting the actual functional state. The null state was flat random, which he did not use to give his results. This may be readily seen from table 1, here where the null state column (col no 4 from left) is NOT the one that gives the functional bits values reported after adjustments for redundancy reflected in aligned segments, col 5. The values I used came from the Fits column. KF

370.
kairosfocus says:

Onlookers, the just above should suffice to show just how wanting in credibility, diligence to duties of care to accuracy and fairness KS has so often shown himself to be. Let us take due note before taking any of his talking points at face value. Unfortunately, on long track record, it is predictable that for a long time to come he will persist in corrected misrepresentations and ad hominem laced projections such as just happened. KF

371.
keiths says:

KF,

As in, because there is some redundancy OBSERVED, he adjusted and reduced info capacity below 4.32 bits per AA, reflecting the actual functional state.

That doesn’t help. As Lizzie has been trying, apparently in vain, to communicate to people here: probability is not an inherent property of a sequence. You have to specify the generating mechanism in order to compute the probability.

What’s the probability of flipping 500 heads in a row? Extremely low if you’re flipping a fair coin. Extremely high if you’re flipping a two-headed coin. The generating mechanism makes all the difference.

You have to specify the mechanism (or mechanisms) in order to evaluate the probability.

Dembski himself stipulates that P(T|H) must include all “Darwinian and material mechanisms”.

Durston hasn’t done that, nor have you. Your CSI values are bogus, and so is your conclusion of design.
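
keiths’s coin example above can be made concrete; a minimal sketch:

```python
from fractions import Fraction

# The probability of 500 heads in a row is not a property of the outcome
# alone; it depends entirely on the generating mechanism.
fair_coin = Fraction(1, 2) ** 500    # fair coin: one chance in 2^500
two_headed = Fraction(1, 1) ** 500   # two-headed coin: heads is certain

print(fair_coin == Fraction(1, 2 ** 500))  # True
print(two_headed == 1)                     # True
```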

372.
Joe says:

keiths:

Dembski himself stipulates that P(T|H) must include all “Darwinian and material mechanisms”.

Durston hasn’t done that, nor have you.

Again keiths is confused. YOUR position needs to do that keiths. And it cannot. Your position can’t even produce a testable hypothesis for “Darwinian and material mechanisms” producing multi-protein configurations.

And keiths, YOU don’t have any idea how evolution works. So perhaps you should stuff a sock in it.

YOU can’t even show that “Darwinian and material mechanisms” deserve a seat at the probability discussion. That is how pathetic your position is.

373.
kairosfocus says:

Onlookers:

Predictably, correction of KS does not take, and he simply will not acknowledge where he did something very wrong.

And because of the back-link from info (the log-antilog relationship) and the known variability of the AAs in protein sequences — which gives an empirically grounded estimate of degree of contingency in the functional “code,” we do have a valid measure of info and how whatever processes obtained across the history of life were able to vary the genome and have it still function on the relevant proteins.

The message is, the info content is well beyond whatever blind search capacity on the gamut of the solar system in 10^17 s could muster, and the linked message is, back-converting to a probability metric established from the info content, we see the probabilities are well below a solar system search threshold where blind search will be a reasonable means.

KS is eager to brush aside and dismiss then have us forget the implications of only being able to make a search on the scope of 1 straw to a cubical haystack 1,000 light years across (as thick as our galaxy), but that ideological desire does not make that go away.

Worse, for neither OOL nor origin of body plans, can KS and ilk show us empirical warrant for claims, assertions, imaginings and just plain assumptions that blind chance and mechanical necessity under any plausible format, can and did produce requisite functionally specific complex organisation and associated info. On OOL, notice, we have a direct knowledge of the probabilities of L-/R- hand monomer formation, and of peptide vs non-peptide bonds, about 50% in both cases.

Just on chirality (handedness), since the macromolecules of life are one-handed (not a 50-50 mix of the two geometries), we can easily see that we are well beyond the FSCO/I threshold, well below the reach of blind sampling on the gamut of the solar system in 10^17 s. This is sending a very clear message that from the root of the Darwinist tree of life on up, there is no good reason to reject the inference that the only empirically warranted cause of FSCO/I was credibly operative, i.e. design.

As for his eagerness to push evolutionary materialist amorality and the ethic of might and manipulation make ‘right,’ that speaks, sadly, for itself.

The bottom line is obvious: we are dealing with a large scale, many decades long socio-cultural and policy agenda driven by a priori — question-begging — evolutionary materialism as ideology that for prestige and apparent credibility reasons finds it convenient to wrap itself in a lab coat.

Johnson’s retort to Lewontin et al was and is on target:

For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”

. . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

KF

374.
kairosfocus says:

F/N: Some may wonder if I am right to point to a priori materialist ideology as a problem, so I suggest reading here on in context for some documentation. This is by no means a case of mere scientific knowledge being opposed by the ignorant, stupid, insane or wicked, as is too often suggested or as has been outright said by Dawkins. KF

375.
kairosfocus says:

F/N 2: Likewise, you may wish to read here on the particular issue of the day being wrapped in a lab coat and pushed on us under colours of “rights” and “equality.” This on some of the manipulative techniques [critique, here], may help us understand how for too many in our day might and manipulation make ‘right.’ KF

376.

KF:

This is by no means a case of mere scientific knowledge being opposed by the ignorant, stupid, insane or wicked, as is too often suggested or as has been outright said by Dawkins

Indeed it is not, KF. And likewise methodological naturalism is not an attempt by the ignorant, stupid, insane or wicked to impose an evil godless ideology on the innocent, either.

Hence my continued attempts to find some common ground.

377.

KF: it seems to me that you are simply not taking in the point that keiths and I have repeatedly made; clearly you think the same of us, but it would be good if you could at least try to address the point, because your responses consistently address something we are not saying. Let me try another time:

Onlookers:

Predictably, correction of KS does not take, and he simply will not acknowledge where he did something very wrong.

And because of the back-link from info (the log-antilog relationship) and the known variability of the AAs in protein sequences — which gives an empirically grounded estimate of degree of contingency in the functional “code,” we do have a valid measure of info and how whatever processes obtained across the history of life were able to vary the genome and have it still function on the relevant proteins.

This simply does not address the issue of how you compute the probability of the sequence given any constraints other than independent random selection. The log-antilog relationship does NOT address this. Converting a probability into and out of log form does nothing to its value or its derivation. You are clearly a very numerate guy, KF: you must surely agree with this.

The message is, the info content is well beyond whatever blind search capacity on the gamut of the solar system in 10^17 s could muster, and the linked message is, back-converting to a probability metric established from the info content, we see the probabilities are well below a solar system search threshold where blind search will be a reasonable means.

No. All you are doing here is taking a probability value, and seeing if it is less than 10^-150. It doesn’t matter whether you log-transform both the probability and the cut-off first, or not. The answer won’t change. We are not questioning this – we get it.
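The point being conceded here is easy to verify mechanically: since log2 is monotonic, comparing a probability to a cutoff before or after the transform must give the same verdict. A minimal sketch in Python (the numbers are illustrative, not taken from any real calculation):

```python
import math

p = 2.0 ** -520          # an illustrative probability value
alpha = 2.0 ** -500      # the "universal" cutoff, roughly 10^-150.5

# Verdict on the raw probability scale...
raw_verdict = p < alpha

# ...and on the log (information) scale, where I = -log2(p) and the
# cutoff becomes a 500-bit threshold. The transform cannot change the answer.
log_verdict = -math.log2(p) > 500

print(raw_verdict, log_verdict)
```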

What we are questioning is the probability value itself. It does not take into account the change in probability value that would be a result of processes other than independent random draw, which is precisely what evolutionary theory proposes. You are rejecting a null, but that null is NOT “evolutionary processes”.

KS is eager to brush aside, dismiss, and then have us forget the implications of only being able to make a search on the scope of 1 straw to a cubical haystack 1,000 light years across (as thick as our galaxy), but that ideological desire does not make that go away.

No. He. Is. Not. He is not questioning the implications of your p value. He is questioning the p value itself.

Please address this – it is so frustrating to have you repeatedly defend your alpha cut-off which no-one is disputing. What we are disputing is your p value!

Worse, for neither OOL nor origin of body plans, can KS and ilk show us empirical warrant for claims, assertions, imaginings and just plain assumptions that blind chance and mechanical necessity under any plausible format, can and did produce requisite functionally specific complex organisation and associated info. On OOL, notice, we have a direct knowledge of the probabilities of L-/R- hand monomer formation, and of peptide vs non-peptide bonds, about 50% in both cases.

Well, that is a different argument, KF. If your argument is that OOL is improbable, fine. But you still can’t compute its probability, and “protein space” won’t help because we don’t even know whether the first Darwinian life-forms involved proteins at all, or, if they did, what selective advantage they might have conferred.

Again: you cannot compute a p value without knowing what your p value is the probability of. As I keep reminding my stats students ad nauseam! First compute the probability distribution under your null. Then, and only then, can you decide whether to reject it or not.

What your rejection criterion is can be anything you like. I’d be perfectly happy with something a lot more lenient than p < 10^-150.

Just on chirality (handedness), we can see that the macromolecules of life are one-handed (not a 50-50 mix of the two geometries), and so we see easily that we are well beyond the FSCO/I threshold, well below the reach of blind sampling on the gamut of the solar system in 10^17 s. This is sending a very clear message that from the root of the Darwinist tree of life on up, there is no good reason to reject the inference that the only empirically warranted cause of FSCO/I was credibly operative, i.e. design.

No, because you haven’t computed the p value based on anything other than independent random draw.

As for his eagerness to push evolutionary materialist amorality and the ethic of might and manipulation make ‘right,’ that speaks, sadly, for itself.

The bottomline is obvious, we are dealing with a large scale, many decades long socio-cultural and policy agenda driven by a priori — question-begging — evolutionary materialism as ideology that for prestige and apparent credibility reasons finds it convenient to wrap itself in a lab coat.

Oh, do shed this ridiculous paranoia, KF! A naturalistic account of the origin of life is absolutely NO threat to the things you value. It’s no threat to theism (improves it, I would say) and certainly has no relationship to gay marriage. The link is absurd, and I suggest that your inability to see the question we are asking, and instead to insistently repeat the same counter-argument to an argument no-one is making, arises from your irrational fear that if we were right (and I do not even claim that we are) somehow all that you hold dear would come crashing down.

It wouldn’t.

378.
kairosfocus says:

F/N 3: BTW, we must beware of manipulative redefinitions as has been put above; the tactic here being to sow so much misinformation that one cannot easily correct it all. And if one tries, the amount of correction that has to be put will then be subjected to accusations we have seen of spamming the thread or the like, or some other excuse to ignore it.

We have seen a case where a point by point refutation, exposing something as a strawman tactic and then correcting it with substantial backup, has been blandly dismissed as not an answer, this being repeated drumbeat style hither and yon as though saying an outright willful distortion of truth like that makes it true. [A clue: failure to link the actual refutation being dismissed (the case is here) will often tell us that something is being hidden.]

A blind search is a non-foresighted one, without oracles that allow unsuccessful cases to be rewarded on warmer/colder messages. That is, if nothing squarely addresses the vast dominance of non-functioning configurations in the space of possibilities for AAs or D/RNA etc., and instead there is the pretence that everything will be functional to some extent (so that all that is needed is incremental hill climbing to ascend to the peaks), then we are not dealing with a blind search. If there is no serious reckoning with the deep isolation of islands of function in cases relevant to FSCO/I (caused by the need for multiple, well matched, properly arranged and coupled parts to achieve function — e.g. symbols forming coherent text in ASCII-coded English), we are being manipulated yet again. KF

379.

A blind search is a non foresighted one, without oracles that allow unsuccessful cases to be rewarded on warmer/colder messages.

And Darwinian evolution is not blind search.

380.
kairosfocus says:

Dr Liddle

Maybe it has not sunk in to you that at this point, it is plain that for months you have harboured slander against me, have tried to deny this and blame me for objecting, then have tried to justify it and pretend that nothing seriously wrong was done.

That has fundamentally changed how I view anything you have to say.

And that is on top of a longstanding problem on your part of making assertions that have been corrected as though repeating error drumbeat style turns it into fact. And not to mention your red herrings, strawmen tactics and squid ink cloud evasions of equally longstanding basis.

I am not going to repeat myself over and over again in endless circles in response to an ideologue who — on abundant evidence — willfully harbours slander and will not change her mind if the truth were staring her eye to eye in the face.

And if you think it is frustrating or irritating to be talked past — even if only to brush aside red herring and strawman distractors, think about how you have treated others for months and years.

I for one will not entertain another dragged out whirling circle of obfuscations, evasions and distractions, backed up by willful obtuseness.

The below is for record, and will be posted here just once.

For some time you have been pretending that the sub-expression P(T|H) in Dembski’s 2005 metric gives you an out to reject the concept of complex specified information, on the further pretence that it cannot be quantified, as the ways and means of chance working cannot be identified directly.

Joe, aptly, keeps pointing out that that already shoots your own case through its heart, as in fact chemical-evolutionary and then Darwinist and similar evolutionary mechanisms have ZERO empirically observed record of being able to originate body plans from the first up. You — along with many others — are throwing up an empirically ungrounded speculation driven by a priori assumptions of materialism or the practical equivalent, dressing it in a lab coat and announcing it as science.

Two years and more ago, when this particular objection first came up under the stolen web persona MathGRRL — there is a Calculus professor who uses it legitimately — we took time to show that by doing a log reduction, the Dembski 2005 metric goes to an explicit information metric, so once we can evaluate the information we do not need to try to work out P(T|H) since we know what it results in once reduced, and can empirically identify that info and deduce a reasonable threshold that addresses the rest of the expression.

Onlookers, for you, I give the link here where this has been there for all to see for 2 years.

The log reduced result is:

Chi_500 = I_p * S – 500, functionally specific bits beyond the solar system threshold (S being a dummy variable, 0 by default and 1 where there is objective reason to see that something is functionally specific; I_p an information value; and 500 the 1,000-LY haystack threshold)

On this basis it is easy to see from I-values from Durston et al, that protein families as follows as just one example are beyond the threshold:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond
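For the record, the arithmetic behind those three lines is a single subtraction once the fits value is in hand; a minimal sketch, taking S = 1 for each family as the comment does (the fits values are the ones quoted above from Durston et al):

```python
# Chi_500 = I_p * S - 500, with S = 1 (taken as functionally specific)
# and I_p the functional bits ("fits") values quoted above.
THRESHOLD = 500
S = 1

families = {"RecA": 832, "SecY": 688, "Corona S2": 1285}
chi = {name: fits * S - THRESHOLD for name, fits in families.items()}

for name, bits in chi.items():
    print(f"{name}: {bits} bits beyond the threshold")
```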

This is consistent with the message from chirality at OOL, but applies to formation of proteins in living forms. It is these I-values about which KS tried to mislead the public, claiming they were based on a flat random distribution. I already pointed out from Durston’s Table I that they are not. (Notice, no acknowledgement of the correction and its significance from that obviously dark triad ideologue.)

Now, in the exchange where I responded initially, I took time to show that whether a sample is flat or biased, it will leave a signature of its action in what happens. (I used a set of letters in the proportions of English, first flat-random sampled with replacement, then sampled in a biased way with replacement; the statistics of the result will show the trace of the bias and redundancies, i.e. the average info per symbol automatically draws in the biases and reflects them, so we can draw the estimates of probability per symbol, on average, back out again — and all we need is that average. We are looking for the weighted average H, not the individual p_i; that is, the decomposition of H as a sum of p_i log p_i terms across the i’s.)
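The averaging step described in that parenthesis can be sketched in a few lines; the strings here are toy examples (not the English-letter experiment itself), but they show how observed frequencies pull the per-symbol average H below the flat-random figure:

```python
import math
from collections import Counter

def entropy_per_symbol(text):
    """Estimate H = -sum(p_i * log2(p_i)) from observed symbol frequencies."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

flat_sample = "abcdefgh"      # 8 equiprobable symbols: H = 3 bits/symbol
biased_sample = "aaaaaabc"    # the bias leaves its trace: H well under 3

print(entropy_per_symbol(flat_sample), entropy_per_symbol(biased_sample))
```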

Where of course, as already pointed out, we need some empirical warrant that the claimed means of blind chance and mechanical necessity are able to actually generate FSCO/I, per observation. Missing.

I need to briefly point out that in the claimed Darwinist mechanism it is chance variation (CV), not differential reproductive success (DRS), that has to be responsible for claimed descent with modification up to and including novel body plans, as DRS is a shorthand for saying inferior or unlucky varieties die out and their info is subtracted from the pool of the population. That is, we are left to the notion that lucky noise writes FSCO/I.

For which there is nowhere any good empirical substantiation, of body plans being actually observed to arise by such.

Now, what I did most recently is to simply work back from the empirically grounded functional bits values of Durston et al (which one can see from Table I are NOT based on 4.32 bits per character [cf. excerpt here], taking into account such redundancies and such variabilities as occur).

That is, I reversed the expression I = – log P to get a P value, cf here:

since we know the info values empirically, and we know the relationship that I = – log_2 (P), we can deduce the P(T|H) values for all relevant hypotheses that may have acted by simply working back from I:

RecA: 242 AA, 832 fits, P(T|H) = 3.49 * 10^-251
SecY: 342 AA, 688 fits, P(T|H) = 7.79 * 10^-208
Corona S2: 445 AA, 1285 fits, P(T|H) = 1.50 * 10^-387
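Since 2^-832 underflows ordinary floating point, the back-conversion is easiest with the base-10 exponent carried separately. A minimal sketch reproducing the three figures above from the fits values alone:

```python
import math

def p_from_fits(fits):
    """Invert I = -log2(P): return (mantissa, exponent) so P = mantissa * 10**exponent."""
    e10 = -fits * math.log10(2)   # base-10 exponent of 2**-fits
    exp = math.floor(e10)
    return 10 ** (e10 - exp), exp

for name, fits in [("RecA", 832), ("SecY", 688), ("Corona S2", 1285)]:
    mantissa, exp = p_from_fits(fits)
    print(f"{name}: P(T|H) = {mantissa:.2f} * 10^{exp}")
```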

That is, the power of the transform allows us to apply an empirical value to what is a more difficult problem to solve the other way. Once we do know the info content of the protein families by a reasonable method, we can then work the expression backwards to see the value of P(T|H). And so, lo and behold, we do not actually have to have detailed expositions on H to do so; once we have the information value, we automatically cover the effect of H etc.

As was said long since but dismissively brushed aside by EL and KS.

And consistently these are probabilities that are far too low to be plausible on the gamut of our solar system, which is the ambit in which body plan level evolution would have had to happen. (Indeed, I could reasonably use a much tighter threshold, the resources of earth’s biosphere, but that would be overkill.)

Now, do I expect EL and KS to accept this result, which boils down to evaluating the value of 2^-I, as we have I in hand empirically?

Not at all, they have long since shown themselves to be ideology driven and resistant to reason (not to mention enabling of slander), as the recent example of the 500 H coin flip exercise showed to any reasonable person.

But this does not stop here.

Joe is right, there is NO empirical evidence that Darwinian mechanisms are able to generate significant increments in biological information and thence new body plans.

All of this — things that are too often promoted as being as certain as say the orbiting of planets around the sun or gravity — is extrapolation from small changes, most often loss of function that happens to confer an advantage in a stressed environment such as under insecticide or sickle cells that malaria parasites cannot take over.

Of course, such is backed by the sort of imposed a priori materialism I highlighted earlier today.

What is plain is that the whole evolutionary materialist scheme for origin of the world of life, from OOL to OO body plans and onwards to our own origin, cannot stand from the root on up.

But this is not all.

There is a far more fundamental problem that you and others have been ducking and dodging, which you as a professional investigator using statistical methods full well must know but have consistently been unwilling to acknowledge, not even when I went to the extent of setting up an instructive thought exercise example.

As in: make up a bristol board normal curve, marked with 1-SD stripes to each side of the mean at the peak. Make it, say, 40 cm high at the peak for convenience. Then go out to 5-6 SDs on the tails. And yes, I know at 4-6 SDs it will be impossibly skinny — at some point about the cross-section of a bacterium. That is the exact point.

Then, drop darts from an elevation so that we get a somewhat flat or even a bit peaked of a distribution. Count hits per stripe and see how after about 100, with “fair” strikes, we would have a pretty good picture of the bulk of the curve. With all but certainty, the far tails will not be hit. This is the needle in haystack effect, it is very hard to hit a rare, distinctively identifiable zone in a space of possibilities. This will also happen even with biased distributions. Once we are not making enough samples to expect reasonably to pick up the far tails or the like in a field of possibilities, we have no good reason to expect to see such cropping up. They are unobservable under the circumstances.
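The dart-drop exercise is straightforward to simulate; with a standard normal standing in for the curve and 100 “fair” strikes, the bulk fills in while stripes beyond about 5 SD stay empty (a sketch with a fixed seed; illustrative of the tendency, not a proof):

```python
import random

random.seed(1)
hits = [random.gauss(0.0, 1.0) for _ in range(100)]

bulk = sum(1 for z in hits if abs(z) <= 3.0)      # strikes within +/- 3 SD
far_tail = sum(1 for z in hits if abs(z) > 5.0)   # strikes beyond 5 SD

print(bulk, far_tail)  # nearly all strikes in the bulk; far tail unhit
```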

So, if we are in the far tails when they should not be observable, it is not likely to be by chance, i.e. the basis for the Fisherian hyp testing programme (and I am speaking loosely) is evident.

This is not dependent on precise estimates of probability, or even on rough ones. It depends only on that we are dealing with blind, even somewhat biased, samples of a distribution with a bulk and zones of interest that are overwhelmingly isolated relative to that bulk. (E.g. when we push our hands into a sack of beans to pull out some and see their quality, or pull up a blood sample from a vein, we do not bother overmuch with niceties about whether we have demonstrated a truly flat random sample, and for good enough reason.)

Now, it can be shown that the atomic resources of our solar system, sampling from a field of possibilities for 500 bits, for 10^17 s [of the order of the generally accepted age of the cosmos] and at a rate of one sample per atom, for 10^57 atoms, every 10^-14 s, will stand as a scope of one straw-sized pull from a haystack 1,000 LY across, as thick as our galaxy’s central bulge. Even if such a haystack were superposed on our galactic neighbourhood, with practical certainty such a pull will pick up a very predictable result: hay and nothing else. That is, it will only capture the bulk.
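The scope arithmetic here is simple exponent bookkeeping: 10^57 atoms, each sampling every 10^-14 s for 10^17 s, gives about 10^88 samples, against roughly 10^150.5 configurations for 500 bits. A sketch:

```python
import math

atoms_exp = 57     # ~10^57 atoms in the solar system
time_exp = 17      # ~10^17 s of search time
rate_exp = 14      # one sample per atom every 10^-14 s

samples_exp = atoms_exp + time_exp + rate_exp   # 10^88 samples in total
space_exp = 500 * math.log10(2)                 # 2^500 ~ 10^150.5 configs

print(f"samples ~ 10^{samples_exp}, space ~ 10^{space_exp:.1f}")
# The sample covers only ~10^-62 of the space: the one-straw pull.
```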

Under these circumstances the possible blind sampling hyps don’t matter; unless the bias is so large as to amount to an oracle, they will not do better than a flat random one of the same scope. And if we were to see something labelled chance and necessity without intelligent direction that picked up the needle under circumstances claimed to be like this, we would have every good reason to be highly suspicious that design was being relabelled as chance.

In short all the hooting and hollering over chance hyps is a red herring lead away from the sampling challenge, and led away to strawmen set up and soaked in ad hominems, set alight to cloud, confuse, poison and polarise the atmosphere.

Until it can be shown under reasonably credible circumstances, per observation, that blind chance and mechanical necessity in a reasonable soup of chemicals will yield a metabolising, encapsulated, gated, von Neumann-architecture, code-using, self-replicating cell, there is no root to the whole tree of life, and it sways and crashes to the ground.

I am not holding my breath for that, for good thermodynamic reasons.

I therefore point out that we do have an alternative that is backed up by billions of un-exceptioned cases where we do directly see FSCO/I being formed: design. And the sampling analysis shows why that is so.

Similarly, we now have design sitting at the table from the root on up.

So, when we look at the 10 – 100 million bits of info to form body plans, per reasonable observation and a back of the envelope cross check, we see that it is reasonable to assign that to design also.

Once design is not excluded by question-begging a priori materialist ideology, it is the blatantly obvious, empirically warranted best explanation of the FSCO/I in the world of life from OOL to us.

That is what you need to answer and it is what you have ducked and dodged aside from for years, here at UD and elsewhere.

KF

381.
kairosfocus says:

F/N: I have shown just above, yet again, that Darwinist search depends on chance variation to generate info in a non-foresighted way, as has been repeatedly shown for literally years and ignored; it then hopes to hill climb by selection that subtracts some of the info generated. This already begs the question of getting to the islands where there are hills; design is interested primarily in how to get to such islands, hill climbing being at most micro-evo. Blind here means non-foresighted, as the objectors trying to throw up yet another red herring and strawman distraction full well know; there is no need for a random variable to be equivalent to a flat random one, just that it follows a distribution that is not determined by controllable input values. Yet another squirt of squid ink cloud. KF

382.
kairosfocus says:

F/N 2: Just to cut off more distractions, here is so simple a search as Wiki:

In probability and statistics, a random variable or stochastic variable is a variable whose value is subject to variations due to chance (i.e. randomness, in a mathematical sense). As opposed to other mathematical variables, a random variable conceptually does not have a single, fixed value (even if unknown); rather, it can take on a set of possible different values, each with an associated probability. [–> which does not have to be equal, obviously.]

A random variable’s possible values might represent the possible outcomes of a yet-to-be-performed experiment, or the possible outcomes of a past experiment whose already-existing value is uncertain (for example, as a result of incomplete information or imprecise measurements). They may also conceptually represent either the results of an “objectively” random process (such as rolling a die), or the “subjective” randomness that results from incomplete knowledge of a quantity. The meaning of the probabilities assigned to the potential values of a random variable is not part of probability theory itself, but instead related to philosophical arguments over the interpretation of probability. The mathematics works the same regardless of the particular interpretation in use.

Random variables can be classified as either discrete (that is, taking any of a specified list of exact values) or as continuous (taking any numerical value in an interval or collection of intervals). The mathematical function describing the possible values of a random variable and their associated probabilities is known as a probability distribution. The realizations of a random variable, that is, the results of randomly choosing values according to the variable’s probability distribution, are called random variates.

383.

KF

That has fundamentally changed how I view anything you have to say.

Well, that’s pretty silly, KF. How can our disagreement as to what does and does not constitute slander make any difference as to whether transforming a p value into and out of base-2 logs changes its value?

Or, rather more importantly, to the validity of your computation of that p value?

I have shown just above, yet again, that darwinist search depends on chance variation to generate info in a non-foresighted way, as has been repeatedly shown for years literally and ignored; it then hopes to hill climb by selection that subtracts some of the info generated.

Nope. You are hopelessly confused. “Chance variation” does not “create information” except in the Shannon sense, and then only if it increases the number of bits rather than reducing it. As mutations can consist of insertions, deletions, point mutations, and repetitions, whether the result is a net increase in Shannon entropy is more or less chance, although I would concede that genome-lengthening mutations are probably more common than genome-shortening ones.

What “selection” does, or rather, what happens next, is that if those “chance” variations (which are drawn from a very narrow distribution around the parent) confer greater or lesser reproductive success than the parental version, then those with greater will become more prevalent, and those with less, less.

As a result, the “population” has acquired information as to what works best (increases reproductive success) in the current environment, represented by the relative prevalence of those sequences that confer it.

There is no mystery as to where this valuable information comes from – it comes from the environmental resources and hazards that the population has to navigate to persist.

You don’t turn Shannon entropy into useful information by expressing the probability as a negative base-2 log. It gets turned into useful information by the process we call “selection”, as described above.

this already begs the question of getting to the islands where there are hills, design is interested primarily in how to get to such islands, hill climbing being at most micro evo.

Evolution is not a hill climbing algorithm. Fitness can go up as well as down, and ravines can be crossed – even plains. This has been shown in lab, in field, and in silico.

Blind here means non foresighted

Evolution is not blind. It is merely “short-sighted” if you insist on this metaphor. It cannot do something now in the knowledge that it will help in the future. It has no such knowledge. However, it does do something now that will help it now. Which is an anthropomorphic way of saying the obvious: Variants that reproduce better in the current environment will become more prevalent in the population. Even if that results in their losing a facility that might come in handy later. They can also retain many variants that are neutral, because of drift, and from time to time these do come in handy later. Finally, once you have a sequence that confers reproductive success, by the same token you have many exemplars of that sequence, vastly increasing the probability that one of them will undergo an enhancing mutation. This is why your “independent draw” model is so totally inadequate as the null. You can reject it easily, but in rejecting it, you are not rejecting evolution.

as the objectors trying to throw up yet another red herring and strawman distraction full well know; there is no need for a random variable to be equivalent to a flat random one, just that it follows a distribution that is not determined by controllable input values. Yet another squirt of squid ink cloud. KF

I’m afraid the squid here is you, KF. I am not talking about flat or other distributions. I’m talking about independent draws from such distributions, whether flat or otherwise. The fact that Durston et al considered non-flat distributions of amino acids is neither here nor there; their calculation was still based on independent draws.

The draws in evolution are the very reverse of independent; they are cumulative.
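The independent-vs-cumulative distinction can be made concrete with a toy search in the style of Dawkins’ “weasel” program (purely illustrative; no one claims it models biology, and the target, mutation rate, and population size here are arbitrary). Independent uniform draws would need about 27^28, roughly 10^40, tries on average to hit the 28-character target; cumulative selection with a retained parent gets there in a few hundred generations:

```python
import random
import string

random.seed(0)

ALPHABET = string.ascii_uppercase + " "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def mutate(parent, rate=0.05):
    """Copy the parent, flipping each character to a random one with prob `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def score(s):
    return sum(a == b for a, b in zip(s, TARGET))

# Cumulative search: each generation keeps the best of the parent plus
# 100 mutated offspring, so the draws are anything but independent.
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET:
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=score)
    generations += 1

print(generations)  # typically a few hundred at most, not ~10^40
```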

384.
Mung says:

Elizabeth Liddle:

I’m glad you like it, Mung. So would you like to apply it to my question as to whether a chaotic system like a tornado has more or less order-as-in-entropy than still air?

How hot is the still air? keiths hot-air hot?

“order-as-in-entropy”

more nonsense.

385.
Mung says:

Is still cold air more or less “ordered” than still hot air?

386.
Mung says:

Note the stillness of the response. What’s the entropy of that I wonder.

387.
Timaeus says:

keiths:

In 236, your gerbil example is not stated accurately enough.

The first law doesn’t say that “matter or energy can be created or destroyed as long as, any old place in the universe, an equal amount of matter or energy is created.” The point of the first law is to stress that existing matter or energy is *converted* to something else. So if the atoms making up the furniture are *converted* (by a process we can ascertain) into gerbils, then the first law is not violated. But if the atoms making up the furniture are literally poofed out of existence (i.e., not converted to gerbil atoms or to anything else, but simply annihilated), and if the atoms making up the gerbils are literally poofed into existence (i.e., not formed from the translation of existing energy into matter, or matter into a different kind of matter, but simply generated ex nihilo), then the first law has been violated even if the energy/matter losses and gains are completely balanced.

I assume that you were imagining that somehow the furniture matter was *converted* into the gerbil matter. In that case, you are right to say that the first law is not violated. But your scenario said nothing about conversion; it suggested that some matter was simply vanishing from the universe in one case and other matter was created ex nihilo in the other. That would be a violation of the first law, even if there was quantitative equivalence of loss and gain.