Uncommon Descent Serving The Intelligent Design Community

Failure of the “compensation argument” and implausibility of evolution

Categories: Biophysics, Intelligent Design

Granville Sewell and Daniel Styer have one thing in common: both wrote an article with the same title, “Entropy and evolution”. But they reach opposite conclusions on a fundamental question: Styer says that the evolutionist “compensation argument” (henceforth “ECA”) is sound, Sewell says it isn’t. Here I briefly explain why I fully agree with Granville. The ECA is an argument that tries to resolve the problems the 2nd law of statistical mechanics (henceforth “2nd_law_SM”) poses for unguided evolution. I adopt Styer’s article as the ECA archetype because he also offers calculations, which make its failure clearer.

The 2nd_law_SM as a problem for evolution.

The 2nd_law_SM says that an isolated system goes toward its more probable macrostates. In this diagram the arrow represents the rightward trend/direction of the 2nd_law_SM:

organization … improbable_states … systems ====>>> probable_states

Sewell says:

“The second law is all about using probability at the microscopic level to predict macroscopic change. […] This statement of the second law, or at least of the fundamental principle behind the second law, is the one that should be applied to evolution.”

The physical evolution of an isolated system passes spontaneously through macrostates of increasing probability until it arrives at equilibrium (the most probable macrostate). Since organization is highly improbable, a corollary of the 2nd_law_SM is that isolated systems don’t self-organize. That is the opposite of what biological evolution claims.


Styer’s ECA.

Since the 2nd_law_SM applies to isolated systems, the ECA runs as follows: the Earth E is not an isolated system, so its entropy can decrease thanks to a compensating entropy increase in the surroundings S (associated with the energy coming from the Sun). Unfortunately, considering the system open doesn’t help, because, as Sewell puts it:

“If an increase in order is extremely improbable when a system is closed, it is still extremely improbable when the system is open, unless something is entering which makes it not extremely improbable.”

Here is how Styer applies the ECA to show that “evolution is consistent with the 2nd law”.
Suppose that, due to evolution, each individual organism is 1000 times more improbable than the corresponding individual was 100 years ago (Emory Bunn says 1000 times is incorrect, that it should be 10^25 times, but this is a detail). If Wi is the number of microstates consistent with the specification of an initial organism I 100 years ago, and Wf is the number of microstates consistent with the specification of today’s improved and less probable organism F, then

Wf = Wi / 1000

At this point he uses Boltzmann’s formula:

S = k * ln (W)

where S = entropy, W = number of microstates, k = 1.38 x 10^-23 joules/degree (Boltzmann’s constant), and ln is the natural logarithm.

Then he calculates the entropy change over 100 years, and finally the entropy decrease per second:

Sf – Si = -3.02 x 10^-30 joules/degrees

By considering all individuals of all species he gets the change in entropy of the biosphere each second: -302 joules/degree. Since he knows that the Earth’s physical entropy throughput each second (due to energy from the Sun) is 420 x 10^12 joules/degree, he concludes: “at a minimum the Earth is bathed in about one trillion times the amount of entropy flux required to support the rate of evolution assumed here”; therefore, evolution is consistent with the 2nd law.
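Styer’s arithmetic is easy to reproduce. The sketch below (Python, using only the figures quoted above; the variable names are mine) recovers the per-organism entropy change from Boltzmann’s formula and the “one trillion times” ratio:

```python
import math

k = 1.38e-23  # Boltzmann's constant, joules/degree (as quoted in the post)

# Styer's assumption: each organism becomes 1000 times more improbable
# per century, i.e. Wf = Wi / 1000, so by S = k ln(W):
delta_S_per_century = k * math.log(1 / 1000)  # per organism, ~ -9.53e-23 J/K

# Figures quoted above for the whole biosphere and for the Earth:
biosphere_rate = -302.0      # J/K per second, all organisms together
solar_throughput = 420e12    # J/K per second, Earth's entropy throughput

ratio = solar_throughput / abs(biosphere_rate)
print(f"{ratio:.2e}")        # ~ 1.39e+12: "about one trillion times"
```

Note that nothing in this calculation touches the probability of organization; it only compares two entropy figures, which is precisely the point at issue in the critique that follows.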

The problem in Styer’s argument (and in general in the ECA).

Although it might seem an innocent issue of measurement units, the introduction of Boltzmann’s formula with k = 1.38 x 10^-23 joules/degree in this context is a conceptual error. With this formula the ECA transforms a difficult problem of probability (connected with the arising of ultra-complex organized systems) into a simple issue of energy (the joule is a unit of energy, work, or amount of heat). This assumes a priori that energy is able to organize organisms from sparse atoms. But that assumption is entirely gratuitous and unproven. That energy can do this is exactly what the ECA should prove in the first place. So Styer’s ECA begs the question.

Similarly, Andy McIntosh (cited by Sewell) says:

Both Styer and Bunn calculate by slightly different routes a statistical upper bound on the total entropy reduction necessary to ‘achieve’ life on earth. This is then compared to the total entropy received by the Earth for a given period of time. However, all these authors are making the same assumption—viz. that all one needs is sufficient energy flow into a [non-isolated] system and this will be the means of increasing the probability of life developing in complexity and new machinery evolving. But as stated earlier this begs the question…

Boltzmann’s formula in the ECA, with its introduction of joules of energy, establishes a bridge between probabilities and the joules coming from the Sun. Unfortunately, this link is unsubstantiated here, because no one has proved that joules cause biological organization. On the contrary, in my previous post “The illusion of organizing energy” I explained why no kind of energy per se can create organization in principle. All the more so for thermal energy, which is unequal to the task: heat is the most degraded and disordered kind of energy, the one with maximum entropy. So the ECA would also contain an internal contradiction: by importing entropy into E one decreases the entropy of E!

The problem with Boltzmann’s formula, as used in the ECA, is then that it tries “to buy” a probability bonus with energy “money”. Sewell expresses the same concept in different words:

The compensation argument is predicated on the idea […] that the universal currency for entropy is thermal entropy.

That conversion/compensation is not allowed unless one has proved at the outset that energy plays a direct causal role in producing the effect, biological organization, which lies in the direction opposite to the rightward arrow of the 2nd_law_SM (at the extreme left of the diagram above). In a sense the ECA conflates two different planes. This wrong conflation is like saying that a roulette wheel placed inside a refrigerated room can easily output one million “blacks” in a row because its entropy is decreased compared to the outside.

Note that evolution doesn’t imply a single small deviation from the trend; quite differently, it implies countless highly improbable processes happening continually in countless organisms over billions of years. Would those who claim that evolution doesn’t violate the 2nd_law_SM doubt a violation if countless tornados always turned rubble into houses, cars and computers for billions of years? Sewell asks (the backward tornado is the metaphor he uses most). In conclusion Roger Caillois is right: “Clausius and Darwin cannot both be right.”

Implausibility of evolution.

Styer’s paper is also an opportunity to see the problem of evolution from a probabilistic viewpoint. You will note the huge difference in difficulty between the probabilistic scenario and the enthusiastic thermal-entropy scenario above, with its entropy flux sufficient for evolution 1,000,000,000,000 times over!
In Appendix #2 he proposes a problem for students: “How much improved and less probable would each organism be, relative to its (possibly single-celled) ancestor at the beginning of the Cambrian explosion? (Answer: 10 raised to the 1.8 x 10^22 times)”. Call this monster number “a”, Wi = the initial microstates, Wf = the final microstates, W = the total microstates. According to Styer’s answer (which is correct as a calculation) we have:

Wf = Wi / a

The probability of the initial macrostate is Wi / W. The probability of the final macrostate is Wf / W. Suppose Wf = 1; then Wi = a. W must be equal to or greater than a, otherwise Wi / W would be greater than 1 (impossible). Therefore the probability of the final macrostate occurring is:

(Wf / W) ≤ (1 / a)

This is the probability of evolution of a single individual organism in the Cambrian:

1 in 10 raised to the 1.8 x 10^22

a number with more than 10^22 digits (ten trillion billion digits). And this miraculous event had to occur about 10^18 times, once for each of the other organisms.

Dembski’s “universal probability bound” is:

1 / 10^150

1 in a number with “only” 151 digits. Therefore evolution is far beyond the plausibility threshold. In conclusion: the ECA fails to prove that “evolution is consistent with the 2nd law”, and we also have a proof of the implausibility of evolution based on probability.
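Since a = 10^(1.8 x 10^22) is far too large for any floating-point type, the comparison with Dembski’s bound has to be made on the base-10 exponents; a minimal sketch (the variable names are mine):

```python
# Base-10 exponents; the numbers themselves cannot be represented directly.
log10_a = 1.8e22     # Styer's Appendix #2: a = 10^(1.8 x 10^22)
log10_dembski = 150  # Dembski's universal probability bound: 1 / 10^150

# P(final macrostate) <= 1/a, i.e. log10(P) <= -1.8e22, versus -150:
assert -log10_a < -log10_dembski

# The exponent itself exceeds the bound's exponent by ~20 orders of magnitude:
print(f"{log10_a / log10_dembski:.1e}")  # 1.2e+20
```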

Some could object: “you cannot have it both ways: if the ECA is wrong then Appendix #2 is wrong too, because it uses the same method, so the evolution probability is not correct”.
Answer: the method is biased toward evolution both in the ECA and in Appendix #2. This means the probability of evolution is even worse than that, and its implausibility holds all the more.

Comments
What is the probability that a low pressure zone will form in the atmosphere based on a statistical distribution of microstates?
In an intelligently designed universe, on an intelligently designed planet, the probability approaches 1, i.e. a certainty.
Joe
March 30, 2015, 10:48 AM PST
Piotr:
Prokaryotes possibly account for most of the biomass on earth.
Yes they do. However unguided evolution cannot explain prokaryotes, and it doesn't have a mechanism capable of getting beyond prokaryotes given them to start with. Are you really oblivious to that?
Joe
March 30, 2015, 10:46 AM PST
Box: Probably group Zachriel reads “order” where it says “organization”. We read and understood the difference. It doesn't change the calculation of entropy. What is the probability that a low pressure zone will form in the atmosphere based on a statistical distribution of microstates? Let's make it easy. Assume a 50 millibar depression, a temperature drop of -5°C, and a kilometer^3 of air affected. What is the difference in microstates? What are the odds of this occurring due to chance fluctuations in the available microstates?
Zachriel
March 30, 2015, 10:43 AM PST
#39 niwrad, Prokaryotes possibly account for most of the biomass on earth. They are enormously successful, and they wouldn't be if they hadn't been adapting to all possible ecological niches for billions of years. Don't underestimate bacteria. They invented most of the biochemistry inherited by the "higher" life forms. Since they also gave rise to eukaryotes, it isn't true that they have all "remained bacteria".
Piotr
March 30, 2015, 10:43 AM PST
Would you care to calculate P(T|H) for any such minimally selectable function?
Provide H in a scenario in which there isn't a minimal selectable function until 5 different proteins are specifically arranged, starting without any of those proteins. Good luck...
Joe
March 30, 2015, 10:40 AM PST
Piotr:
Bacteria just go on reproducing; they aren’t trying to hit a small target in a vast config space.
Yes, we all know that it is impotent. Bacteria remain bacteria. OK for baraminology but not OK for evolutionism.
Joe
March 30, 2015, 10:38 AM PST
Probably group Zachriel reads "order" where it says "organization".
Box
March 30, 2015, 10:33 AM PST
Piotr #36
Evolution is not a search for a specific state. Bacteria just go on reproducing; they aren’t trying to hit a small target in a vast config space.
This is exactly the reason why bacteria remain bacteria. If you don't search, how can you find?
niwrad
March 30, 2015, 10:22 AM PST
niwrad: There is a fundamental law of physics — 2nd_law_SM — that says that processes spontaneously go in the opposite direction of organization and IDists should not use it against evolutionism, which claims spontaneous organization? Well, calculate the odds. What is the probability that a low pressure zone will form in the atmosphere based on a statistical distribution of microstates?
Zachriel
March 30, 2015, 10:21 AM PST
scordova #30
But neither then should IDists use that same sort of conflation to argue against evolution using the 2nd law. IDists should use the LLN or some similar principle since LLN (or some similar idea) is the basis of 2LOT, not the other way around!
I disagree. There is a fundamental law of physics -- 2nd_law_SM -- that says that processes spontaneously go in the opposite direction of organization, and IDists should not use it against evolutionism, which claims spontaneous organization? To disprove the conflation (the ECA), as I did here, doesn't mean to conflate.
niwrad
March 30, 2015, 10:16 AM PST
#33 Niwrad,
The problem is to physically search for a specific physical state in a physical state space composed of 3.2733906078961×10^150 states.
#34 KF,
The odds of getting any one prespecified state be chance tossing of 500 coins is practically nill.
Another fallacy (a straw man). Evolution is not a search for a specific state. Bacteria just go on reproducing; they aren't trying to hit a small target in a vast config space. According to most estimates, there are more than 10^30 individual organisms on Earth at present (most of them prokaryotic), and each of them represents a viable solution of the problem "how to survive and reproduce" (viruses add another order of magnitude, if you count them). They can be divided into millions of extremely varied species, exploiting innumerable survival strategies. During the history of the planet, some ten billion times as many other states and strategies have been tried. So much for organisms that have physically existed. The number of viable organisms that could conceivably have existed but have had no opportunity to evolve (the unrealised possibilities) dwarfs all these numbers.Piotr
March 30, 2015, 10:07 AM PST
kairosfocus, I understand from your statements
we can separately specify functional clusters of configs (e.g. via a description language for parts and how they go together to make a working whole). This deeply constrains acceptable possibilities to a zone T that forms an island of function in a much larger space. The challenge is, by blind chance and necessity driven needle in haystack search that on the gamut of the sol system is as one straw to a cubical haystack comparably thick as our galaxy, to hit such a T in W.
together with
Yes, we can argue over hill-climbing algors, they are about improvements within a zone T, not getting there blindly.
that you are using a definition of T which differs from Dembski's: specifically, in order to avoid having to consider the effects of hill-climbing algorithms, you are viewing T as being the space of minimally selectable functions. I approve. Would you care to calculate P(T|H) for any such minimally selectable function? A calculation that avoided your "bit-counting" fallacy would be a first.
DNA_Jock
March 30, 2015, 09:38 AM PST
Piotr: Still busy, but I can snatch a moment for:
The resources of the Universe have nothing to do with it. In the case of a fair coin flipped 500 times, you have 3,273390…e+150 possible outcomes, and the probability of each of them, calculated a priori, is 3,054936…e-151. Does it mean that none of them can happen because “the world is not enough” to store them all, and the probability is below the “Dembski limit”? Not at all. You can toss the coin 1000 times, getting a unique result from a sample space of 10^301 elements.
You know or should full well know that the states are divisible into clusters in a very natural way. The odds of getting any one prespecified state by chance tossing of 500 coins is practically nil. E.g. All H or all T, or the code for the first 72 ASCII characters of this post. Let's pick that up: we can separately specify functional clusters of configs (e.g. via a description language for parts and how they go together to make a working whole). This deeply constrains acceptable possibilities to a zone T that forms an island of function in a much larger space. The challenge is, by blind chance and necessity driven needle in haystack search that on the gamut of the sol system is as one straw to a cubical haystack comparably thick as our galaxy, to hit such a T in W. To all but the ideologically locked in, that is an obviously hopeless task. Yes, we can argue over hill-climbing algors, they are about improvements within a zone T, not getting there blindly. And yet it took only a short time for me, by intelligently directed configuration, to type out those letters. And yes, to do that I have energy flows and mass flows. But that is not all, I have a constructor and intelligently sourced informational control. I could readily arrange 500 coins on a table in the right code. But all the energy of wind, rain, table shaking, earthquakes etc will reliably not achieve the FSCO/I rich zones. The problem, as you full well know or should, is to get to the shores of function by blind search, and the statistical underpinnings of 2LOT show why that will predictably fail. KF
kairosfocus
March 30, 2015, 09:08 AM PST
Piotr #29 The problem is not to flip a coin 500 times. If each flip lasts 1 second you finish in 500 seconds. The problem is to physically search for a specific physical state in a physical state space composed of 3.2733906078961x10^150 states. If each state search needs 1 sec you need 3.2733906078961x10^150 sec, while the Universe can give you only 10^17 sec. Analogously, in the case of the Cambrian evolution, the physical state space is composed of (10 raised to 1.8 x 10^22) states. If each state search needs 1 sec you need (10 raised to 1.8 x 10^22) sec, and the Universe is 10^17 sec old. P.S. Your insults to the UD folks are a sign that our ID arguments are ok.
niwrad
March 30, 2015, 09:05 AM PST
Earth to Piotr: You don't have an argument. Your position doesn't have any supporting evidence because it doesn't have any entailments. Obviously you don't realize any of that, or perhaps you do and that is why you attack us, so you can distract from the fact that you have nothing. You can't demolish anything, Piotr. To demolish ID and its metrics you need actual evidence, not your hopeless misrepresentations. You do a great disservice to humanity with your childish bickering and inability to support your position.
Joe
March 30, 2015, 08:19 AM PST
Zachriel is such a clueless little child. It doesn't understand that its position cannot account for storms...
Joe
March 30, 2015, 08:15 AM PST
I wrote
And finally, why this obsession with reducing entropy (both thermodynamic and design space entropy)?
niwrad
Entropy is not MY obsession, it is the obsession of evolutionists, because they use it to obfuscate.
I was referring to "entropy reduction" as in this statement
the total entropy reduction necessary to ‘achieve’ life on earth.
I pointed out, it's not about entropy reduction, but fine tuning thermodynamic entropy for starters. And even then, that is merely a necessary, not sufficient condition.
The second law is all about using probability at the microscopic level to predict macroscopic change. [...] This statement of the second law, or at least of the fundamental principle behind the second law,
Like what fundamental principle behind the second law, something akin to, uh, LAW OF LARGE NUMBERS (LLN)? It's been said before: https://uncommondescent.com/mathematics/the-fundamental-law-of-intelligent-design/ For 500 coins, the macrostates are: 250 heads/250 tails (50% heads), 251 heads/249 tails, ... The most likely macrostates are say 50% heads +/- a few standard deviations. Hence by LLN, the system tends to 50% heads, not 100% heads. The 2nd law follows similar large-number tendencies. IDists should use the LLN (law of large numbers), not 2LOT. Why? The 2nd law deals with thermodynamic microstates and thermodynamic entropy, whereas design deals with non-thermodynamic microstates and non-thermodynamic entropy. I gave a simple illustration with coins that one should absolutely not equivocate the thermodynamic microstates with design-space microstates. The same applies to the design-space microstates of biological organisms.
In a sense the ECA conflates two different planes. This wrong conflation is like to say that a roulette placed inside a refrigerated room can easily output 1 million “black” in a row because its entropy is decreased compared to the outside.
Agreed it's wrong to apply the 2nd law in favor of evolution because of the conflation error. But neither then should IDists use that same sort of conflation to argue against evolution using the 2nd law. IDists should use the LLN or some similar principle, since LLN (or some similar idea) is the basis of 2LOT, not the other way around!
scordova
March 30, 2015, 07:14 AM PST
#20 niwrad, The argument is bogus. Let me repeat: the sample space is abstract, not real. The resources of the Universe have nothing to do with it. In the case of a fair coin flipped 500 times, you have 3,273390...e+150 possible outcomes, and the probability of each of them, calculated a priori, is 3,054936...e-151. Does it mean that none of them can happen because "the world is not enough" to store them all, and the probability is below the "Dembski limit"? Not at all. You can toss the coin 1000 times, getting a unique result from a sample space of 10^301 elements. Let's imagine a realistic situation. You have one million bacteria in a culture. There is just enough substrate supplied to let one million survive, keeping the size of the population stable. Every bacterium from your culture splits into two "daughters" once a day. However, the culture can't grow to two million: on an average, 50% of the cells will starve to death. Let's now pick a bacterium from generation zero (Gen0) and call her Betty. Betty splits into two daughters, Mary and Ann, who belong to the first generation of descendants (Gen1). Assuming that Mary and Ann are equally fit, they may both die (p=0.25), they may both survive (p=0.25), or one of them may die and the other survive (p=0.5). Therefore, the expected number of surviving descendants left by Betty in the next generation is 0.25x0 + 0.25x2 + 0.5x1 = 1. For Mary, the probability of living long enough to split is 0.5, likewise for Ann, and likewise for all Gen1 bacteria, provided that they are all equally fit. They are all equally "probable" (p=0.5). For the next generation (Gen2), the a priori probability of their survival and reproductive success will equal 0.25. Two years later we have generation number 730, consisting of bacteria whose a priori chances were 2^(-730) = 1,770529...e-220, seventy orders of magnitude below Dembski's "universal limit". In fact, since bacteria can mutate and differentiate, their fitness will not be uniform. 
Population genetics will tell you what happens if their survival chances are not quite equal. But still, even the best adapted bacteria in Gen730 simply shouldn't exist according to your understanding of probability theory. Good news! We don't need antibiotics. Dembski's probability limit will kill every bug after a few hundred rounds of replication! You UD folks do a great disservice to the ID community with those ridiculous OPs, showing your abysmal ignorance of basic physics and maths (not to mention real biology). It's not an exaggeration; I mean every word of it. And you make it worse by sticking to your guns doggedly and starting thread after thread based on the same ignorant misconceptions (even after being shown your errors for the nth time). As KF has pointed out, I'm not an expert on thermodynamics and probability theory. I'm a poor ol' linguist, but even I have to know something about physics and statistics to do my job. If a Humanities guy like me can easily demolish your number tricks, just think what a real expert could do if one of them cared to visit your blog.
Piotr
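Piotr's survival arithmetic in the comment above is easy to verify; a quick sketch (the variable names are mine):

```python
import math
from fractions import Fraction

# Expected surviving daughters per bacterium per generation
# (Piotr's figures: both die p=1/4, both survive p=1/4, one survives p=1/2):
expected = Fraction(1, 4) * 0 + Fraction(1, 4) * 2 + Fraction(1, 2) * 1
print(expected)  # 1 -> the population stays stable, as stated

# A priori survival probability of one particular lineage after 730 days:
log10_p = -730 * math.log10(2)
print(log10_p)   # ~ -219.75, i.e. about 1.77e-220
```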
March 30, 2015, 06:38 AM PST
kairosfocus: L K Nash used coins as a classic first introduction, where a coin is a one-bit register, so 500 have 2^500 possibilities and 20 would have — as a real toy case, 2^20. A toy model, an analogy. A storm is extraordinarily improbable as due to chance arrangements of microstates, far more unlikely than 500 heads with 500 coins.
Zachriel
March 30, 2015, 06:19 AM PST
Z, you are repeating things that have been answered. L K Nash used coins as a classic first introduction, where a coin is a one-bit register, so 500 have 2^500 possibilities and 20 would have -- as a real toy case, 2^20. At that level, the general pattern of the statistics will already be evident, with a sharp peak of configs near 50-50 H-T, in no particular order. 20 bits is about 3 ascii characters, so the threshold of information that is relevant to the design inference is nowhere near to such. KF
kairosfocus
March 30, 2015, 06:13 AM PST
niwrad: Intelligent design has organized all organisms with countless advanced ultra-sophisticated homeostatic cybernetic systems to maintain constant functionalities despite all internal and external injuries and to contrast the 2nd_law_SM trend. While storms aren't cybernetic, they do exhibit homeostasis and resistance to injury, and "contrast the 2nd_law_SM trend". Indeed, a storm is extraordinarily improbable as due to chance arrangements of microstates.
Zachriel
March 30, 2015, 06:04 AM PST
Box #18
The question arises: what prevents the 2nd law from having its way with an organism, as it does, in fact, at the moment of death?
Short answer: o_r_g_a_n_i_z_a_t_i_o_n. Intelligent design has organized all organisms with countless advanced ultra-sophisticated homeostatic cybernetic systems to maintain constant functionalities despite all internal and external injuries and to contrast the 2nd_law_SM trend. When this organization shuts down, the 2nd_law_SM does its destructive job, amen.
niwrad
March 30, 2015, 05:38 AM PST
Box: Read this very carefully: any argument featuring storms or snowflakes is not an argument relevant to organization.
Ignoring the point won't make it go away.
Box: Read this very carefully: any argument featuring storms or snowflakes is not an argument relevant to organization.
They are thermodynamic processes, and if your understanding of thermodynamics is contradicted by the facts, then your understanding must be in error.
kairosfocus: You will note that, consistently I have spoken to the statistical, microstate underpinnings that have been inextricably connected to statements of 2LOT for 100+ years.
A storm has low entropy compared to the surrounding atmosphere. The storm is extraordinarily improbable as due to chance arrangements of microstates. The low entropy is due to work being performed.
Box: the 2nd law allows for local decreases in entropy (order).
That's right.
Box: However such a decrease in entropy is equally destructive for the finely calibrated entropy of life.
The 2nd law of thermodynamics allows for high entropy, low entropy, and in-between entropy. Life does not violate the 2nd law of thermodynamics. It takes work to maintain the entropy of living organisms.
Box: Ironic how evolutionists always pretend to be blissfully unaware of this "minor" fact.
Asked and answered.
niwrad: 10^150 is the product of the physical resources of the universe
How many microstates do twenty pennies have?
Zachriel
March 30, 2015, 05:24 AM PST
Piotr: The fact that a particular individual is “extremely improbable” does not mean that its evolution was impossible. It only means that organisms actually living represent a very, very small subset of organisms that could in theory have evolved, but didn’t.
In #18 I argue that there is not "just" a problem with the coming into existence of an organism, but also with its continued existence. Ironic how evolutionists always pretend to be blissfully unaware of this "minor" fact. IOW the beat (the assault on the finely calibrated entropy, see #18) goes on also in the extremely unlikely event that an organism is formed.
Box
March 30, 2015, 04:38 AM PST
Piotr says, The fact that a particular individual is "extremely improbable" does not mean that its evolution was impossible. It only means that organisms actually living represent a very, very small subset of organisms that could in theory have evolved, but didn't. I say, Agreed. That is why specification is so important in these discussions. It is not enough for a particular event to be improbable; it must also be specified. Any given sequence resulting from a coin toss is equally improbable. But 500 heads from a fair coin toss is both highly improbable and highly specified. That is why examples of generic snowflakes and storms are totally irrelevant. They might be improbable but they are not highly specified. On the other hand if I happen on to an event that is both highly specified and highly improbable (like 500 fair coin heads) I naturally will suspect that there has been some intelligent manipulation of the process so as to skirt the implications of the 2nd law. That is what the discussion is about. Now it's possible that there is some hidden unknown algorithm that "naturally" produces heads every time with this one particular fair coin. But the burden of proof is on those who would make that claim. peace
fifthmonarchyman
March 30, 2015, 04:21 AM PST
F/N: It seems objectors need to take a timeout to read the now longstanding universal plausibility bound article by Abel: http://www.tbiomed.com/content/6/1/27 KF PS: 500 bits specifies a config space of 3.27 * 10^150 possibilities, and squaring that to get 1,000 bits' worth, 1.07*10^301 possibilities, ensures that no search process on the gamut of the observed cosmos can pick 1 in 10^150 of the possibilities. Hence the 500 - 1,000 bit threshold. For practical purposes I suggest fast chem rxn rates of 10^-13 s or 10^-14 s (ionic, fast for organics) and 10^57 atoms for 10^17 s in the sol system, or 10^80 in the observed cosmos.
kairosfocus
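The two figures in KF's PS can be checked exactly with arbitrary-precision integers; a minimal sketch:

```python
import math

w500 = 2 ** 500              # 500-bit config space
print(f"{float(w500):.4e}")  # 3.2734e+150
print(len(str(w500)))        # 151 digits

w1000 = w500 ** 2            # squaring gives the 1000-bit space
print(math.log10(w1000))     # ~ 301.03 -> about 1.07e+301
```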
March 30, 2015, 04:05 AM PST
Piotr #15 10^150 is the product of the physical resources of the universe: 10^17 [sec] x 10^43 [transitions per sec] x 10^90 [particles]. Traversing a physical state space has a cost in terms of physical resources. Independently of how you specify seconds, transitions and particles, reaching a specific functional state among (10 raised to 1.8 x 10^22) states exceeds this limit; therefore it is implausible. See e.g. Bill Dembski, "The design inference", sec. 6.5, or David Abel, "The first gene", chap. 11.
niwrad
March 30, 2015, 03:55 AM PST
New on the market: Darwin Blocks®! Start with our patented Chance-Magic® free-floating lifeless molecules and watch as they spontaneously assemble themselves into a stable, fully functioning self-replicating 3D printer! Keep watching as random print errors produce diverse life forms and an entire stable ecosystem complete with intelligent life and an advanced technological civilization! No instructions included - none are necessary! Just add sunlight!
William J Murray
March 30, 2015, 03:40 AM PST
The organization we find in life can be said to have just the right amount of entropy. The 2nd law poses an enormous threat to this finely calibrated entropy. When discussing life we are talking about low entropy, so in general the 2nd law has the tendency to steer life towards increased entropy (disorder). Some have rightly pointed out that there is an exception to this generality: the 2nd law allows for local decreases in entropy (order). However such a decrease in entropy is equally destructive for the finely calibrated entropy of life. Scordova stated the problem very aptly:
It’s not a matter of having too much or too little entropy, but just the right amounts.
So we have organization with finely calibrated entropy, and the 2nd law that wants to mess with it one way or the other. The question arises: what prevents the 2nd law from having its way with an organism, as it does, in fact, at the moment of death? What preserves this finely calibrated entropy exactly for a lifetime and not a moment longer?
Box
March 30, 2015, 03:15 AM PST
Piotr:
Chance (random drift) and differential fitness (natural selection) are important aspects of this process.
They are both impotent, Piotr. People invoke Dembski for the simple reason that your position has nothing: no evidence, no entailments, no predictions, no hypotheses and no models. You have nothing. Grow up and deal with it.
Joe
March 30, 2015, 03:10 AM PST
