
A Designed Object’s Entropy Must Increase for Its Design Complexity to Increase – Part 1


The common belief is that adding disorder to a designed object will destroy the design (like a tornado passing through a city, to paraphrase Hoyle). Now if increasing entropy implies increasing disorder, creationists will often reason that “increasing entropy of an object will tend to destroy its design”. This essay will argue mathematically that this popular notion among creationists is wrong.

The correct conception of these matters is far more nuanced and almost the opposite of (but not quite) what many creationists and IDists believe. Here is the more correct view of entropy’s relation to design (be it man-made or otherwise):

1. increasing entropy can increase the capacity for disorder, but it doesn’t necessitate disorder

2. increasing an object’s capacity for disorder doesn’t imply that the object will immediately become more disordered

3. increasing entropy in a physical object is a necessary (but not sufficient) condition for increasing the complexity of the design

4. contrary to popular belief, a complex design is a high entropy design, not a low entropy design. The complex organization of a complex design is made possible (and simultaneously improbable) by the high entropy the object contains.

5. without entropy there is no design

If there is one key point it is: Entropy makes design possible but simultaneously improbable. And that is the nuance that many on both sides of the ID/Creation/Evolution controversy seem to miss.

The notion of entropy is foundational to physics, engineering, information theory and ID. These essays are written to provide a discussion on the topic of entropy and its relationship to other concepts such as uncertainty, probability, microstates, and disorder. Much of what is said will go against popular understanding, but the aim is to make these topics clearer. Some of the math will be in a substantially simplified form, so apologies in advance to the formalists out there.

Entropy may refer to:

1. Thermodynamic (Statistical Mechanics) entropy – measured in Joules/Kelvin, dimensionless units, degrees of freedom, or (if need be) bits

2. Shannon entropy – measured in bits or dimensionless units

3. Algorithmic entropy or Kolmogorov complexity – also measured in bits, but it deals with the compactness of a representation. A file that can be compressed substantially has low algorithmic entropy, whereas a file that cannot be compressed evidences high algorithmic entropy (Kolmogorov complexity). Both Shannon entropy and algorithmic entropy belong to information theory, but by default, unless otherwise stated, most people take Shannon entropy to be "the" entropy of information theory.

4. disorder in the popular sense – no real units assigned, and often not precise enough to be of scientific or engineering use. I will argue, mathematically, that "disorder" is a misleading way to conceptualize entropy, even though, unfortunately, the word "disorder" is used in many university science books.

The reason the word entropy is used in the disciplines of Thermodynamics, Statistical Mechanics and Information Theory is that there are strong mathematical analogies. The evolution of the notion of entropy began with Clausius who also coined the term for thermodynamics, then Boltzmann and Gibbs related Clausius’s notions of entropy to Newtonian (Classical) Mechanics, then Shannon took Boltzmann’s math and adapted it to information theory, and then Landauer brought things back full circle by tying thermodynamics to information theory.

How entropy became equated with disorder, I do not know, but the purpose of these essays is to walk through actual calculations of entropy and allow the reader to decide for himself whether disorder can be equated with entropy. My personal view is that Shannon entropy and Thermodynamic entropy cannot be equated with disorder, even though the lesser-known algorithmic entropy can. So in general entropy should not be equated with disorder. Further, the problem of organization (which goes beyond simple notions of order and entropy) needs a little more exploration. Organization sort of stands out as a quality that seems difficult to assign numbers to.

The calculations that follow give an illustration of how I arrived at some of my conclusions.

First I begin by calculating the Shannon entropy for simple cases. Thermodynamic entropy will be covered in Part II.

Bill Dembski actually alludes to Shannon entropy in his latest offering on Conservation of Information Made Simple

In the information-theory literature, information is usually characterized as the negative logarithm to the base two of a probability (or some logarithmic average of probabilities, often referred to as entropy).

William Dembski
Conservation of Information Made Simple

To elaborate on what Bill said, if we have a fair coin, it can exist in two microstates: heads (call it microstate 1) or tails (call it microstate 2).

After a coin flip, the probability of the coin emerging in microstate 1 (heads) is 1/2. Similarly the probability of the coin emerging in microstate 2 (tails) is 1/2. So let me tediously summarize the facts:

N = Ω(N) = Ω = Number of microstates of a 1-coin system = 2

x1 = microstate 1 = heads
x2 = microstate 2 = tails

P(x1) = P(microstate 1)= P(heads) = probability of heads = 1/2
P(x2) = P(microstate 2)= P(tails) = probability of tails = 1/2

Here is the process for calculating the Shannon entropy of a 1-coin information system, starting with Shannon's famous formula:

$$ I = -\sum_{i=1}^{\Omega} P(x_i)\,\log_2 P(x_i) $$

where I is the Shannon entropy (or measure of information). Substituting the values for the 1-coin system:

$$ I = -\left[\tfrac{1}{2}\log_2\tfrac{1}{2} + \tfrac{1}{2}\log_2\tfrac{1}{2}\right] = -\left[-\tfrac{1}{2} - \tfrac{1}{2}\right] = 1 \text{ bit} $$

This method seems a rather torturous way to calculate the Shannon entropy of a single coin. A slightly simpler method exists if we take advantage of the fact that each microstate of the coin (heads or tails) is equiprobable, and thus conforms to the fundamental postulate of statistical mechanics, so we can calculate the number of bits by simply taking the logarithm of the number of microstates, as is done in statistical mechanics:

$$ I = \log_2 \Omega = \log_2 2 = 1 \text{ bit} $$

Now compare this equation for the Shannon entropy in information theory

$$ I = \log_2 \Omega $$

to the Boltzmann entropy from statistical mechanics and thermodynamics

$$ S = k_B \ln \Omega $$

and even more so using different units whereby $k_B = 1$ (so that S can be measured in bits):

$$ S = \log_2 \Omega $$

The similarities are not an accident. Shannon’s ideas of information theory are a descendant of Boltzmann’s ideas from statistical mechanics and thermodynamics.
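
Concretely, the two formulas differ only by the base of the logarithm and the choice of units. Converting bases shows the thermodynamic value of one bit of missing information (a standard identity, stated here for reference):

$$ S = k_B \ln \Omega = (k_B \ln 2)\,\log_2 \Omega, \qquad 1\ \text{bit} \;\leftrightarrow\; k_B \ln 2 \approx 9.57\times 10^{-24}\ \text{J/K} $$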

To explore Shannon entropy further, let us suppose we have a system of 3 distinct coins. The Shannon entropy gives the amount of information that will be gained by observing the collective state (microstate) of the 3 coins.

First we have to compute the number of microstates or ways the system of coins can be configured. I will lay them out specifically.

microstate 1 = H H H
microstate 2 = H H T
microstate 3 = H T H
microstate 4 = H T T
microstate 5 = T H H
microstate 6 = T H T
microstate 7 = T T H
microstate 8 = T T T

N = Ω(N) = Ω = Number of microstates of a 3-coin system = 8

So there are 8 microstates or outcomes the system can realize. The Shannon entropy can be calculated in the torturous way:

$$ I = -\sum_{i=1}^{8} P(x_i)\,\log_2 P(x_i) = -8 \times \tfrac{1}{8}\log_2\tfrac{1}{8} = 3 \text{ bits} $$

or simply by taking the logarithm of the number of microstates:

$$ I = \log_2 \Omega = \log_2 8 = 3 \text{ bits} $$

It can be shown that the Shannon entropy of a system of N distinct coins is N bits. That is, a system with 1 coin has 1 bit of Shannon entropy, a system with 2 coins has 2 bits of Shannon entropy, a system of 3 coins has 3 bits of Shannon entropy, and so on.
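
As a sanity check on the N-coins-means-N-bits claim, here is a minimal Python sketch (my own illustration; the function names are not from the original post) that enumerates the microstates of an N-coin system and computes the Shannon entropy both the "torturous" way and by taking the logarithm of the microstate count:

```python
import math
from itertools import product

def shannon_entropy_torturous(n_coins):
    """Sum -P(x) * log2(P(x)) over every microstate of n_coins fair coins."""
    microstates = list(product("HT", repeat=n_coins))  # e.g. 8 microstates for 3 coins
    p = 1.0 / len(microstates)                         # equiprobable microstates
    return -sum(p * math.log2(p) for _ in microstates)

def shannon_entropy_simple(n_coins):
    """log2 of the number of microstates (valid because they are equiprobable)."""
    return math.log2(2 ** n_coins)

for n in (1, 2, 3, 10):
    print(n, shannon_entropy_torturous(n), shannon_entropy_simple(n))
# Both methods give 1, 2, 3, and 10 bits respectively: N coins carry N bits.
```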

Notice that the more microstates there are, the more uncertainty exists about which microstate the system will be found in. Equivalently, the more microstates there are, the more improbable it is that the system will be found in any particular microstate. Hence entropy is sometimes described in terms of improbability, uncertainty, or unpredictability. But we must be careful here: uncertainty is not the same thing as disorder. That is a subtle but important distinction.

So what is the Shannon Entropy of a system of 500 distinct coins? Answer: 500 bits, or the Universal Probability Bound.
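
For a sense of the improbability that 500 bits represents, the chance of hitting any one particular microstate of 500 fair coins can be worked out in log space (the identification of 500 bits with Dembski's universal probability bound of roughly 1 in 10^150 is the post's framing; the arithmetic below just shows the order of magnitude):

```python
import math

bits = 500
log10_p = -bits * math.log10(2)  # log10 of 2**-500
print(f"P(one specific 500-coin microstate) ~ 10^{log10_p:.1f}")  # about 10^-150.5
```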

By way of extension, if we wanted to build an operating system like Windows-7 that requires gigabits of storage, we would require the computer memory to contain gigabits of Shannon entropy. This illustrates the principle that more complex designs require larger Shannon entropy to support the design. It cannot be otherwise. Design requires the presence of entropy, not absence of it.

Suppose we found that a system of 500 coins was all heads; what is the Shannon entropy of this 500-coin system? Answer: 500 bits. No matter what configuration the system is in, whether ordered (like all heads) or disordered, the Shannon entropy remains the same.

Now suppose a small tornado went through the room where the 500 coins resided (all heads before the tornado). What is the Shannon entropy after the tornado? The same as before: 500 bits! What may arguably change is the algorithmic entropy (Kolmogorov complexity). The algorithmic entropy may go up, which simply means we can no longer represent the configuration of the coins in a compact way, such as saying "all heads" or writing it in Kleene notation as H*.

Amusingly, if in the aftermath of the tornado's rampage the room got cooler, the thermodynamic entropy of the coins would actually go down! Hence the order or disorder of the coins is independent not only of the Shannon entropy but also of the thermodynamic entropy.

Let me summarize the before and after of the tornado going through the room with the 500 coins:

BEFORE : 500 coins all heads, Temperature 80 degrees
Shannon Entropy : 500 bits
Algorithmic Entropy (Kolmogorov complexity): low
Thermodynamic Entropy : some finite starting value

AFTER : 500 coins disordered
Shannon Entropy : 500 bits
Algorithmic Entropy (Kolmogorov complexity): high
Thermodynamic Entropy : lower if the temperature is lower, higher if the temperature is higher
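
A crude but instructive way to see the algorithmic-entropy row change while the Shannon-entropy row stays put is to use a general-purpose compressor as a stand-in for Kolmogorov complexity. This is only a sketch: compressed size is an upper-bound proxy for algorithmic entropy, not the real thing.

```python
import random
import zlib

ordered = "H" * 500                                            # all heads, before the tornado
scrambled = "".join(random.choice("HT") for _ in range(500))   # after the tornado

# Under the fair-coin model both strings carry the same Shannon entropy:
# 500 coins x 1 bit per coin = 500 bits.  Compressibility, however, differs sharply.
for label, coins in (("all heads", ordered), ("scrambled", scrambled)):
    size = len(zlib.compress(coins.encode(), level=9))
    print(f"{label:10s} -> {size} bytes compressed")
# The all-heads string collapses to a few bytes (low algorithmic entropy); the
# scrambled string still needs roughly one bit per coin, i.e. dozens of bytes.
```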

Now, to help disentangle the concepts a little further, consider three computer files:

File_A : 1 gigabit of binary numbers randomly generated
File_B : 1 gigabit of all 1’s
File_C : 1 gigabit encrypted JPEG

Here are the characteristics of each file:

File_A : 1 gigabit of binary numbers randomly generated
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): high
Thermodynamic Entropy: N/A
Organizational characteristics: highly disorganized
inference : not designed

File_B : 1 gigabit of all 1’s
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov Complexity): low
Thermodynamic Entropy: N/A
Organizational characteristics: highly organized
inference : designed (with qualification, see note below)

File_C : 1 gigabit encrypted JPEG
Shannon Entropy: 1 gigabit
Algorithmic Entropy (Kolmogorov complexity): high
Thermodynamic Entropy: N/A
Organizational characteristics: highly organized
inference : extremely designed

Notice, one cannot ascribe high levels of improbable design based on the Shannon entropy or algorithmic entropy without some qualification. Existence of improbable design depends on the existence of high Shannon entropy, but is somewhat independent of algorithmic entropy. Further, to my knowledge, there is not really a metric for organization that is separate from Kolmogorov complexity, but this definition needs a little more exploration and is beyond my knowledge base.
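
The same compressor-as-proxy trick separates File_B from the other two, but it cannot separate File_A from File_C, which is exactly why algorithmic entropy alone does not settle the design question. In the sketch below (sizes scaled down for speed), random bytes stand in for both the random file and the encrypted JPEG, since the output of good encryption is statistically indistinguishable from random data:

```python
import os
import zlib

SIZE = 1_000_000  # 1 MB stand-in for "1 gigabit", to keep the demo quick

file_a = os.urandom(SIZE)   # File_A: randomly generated bits
file_b = b"\xff" * SIZE     # File_B: all 1's
file_c = os.urandom(SIZE)   # File_C stand-in: ciphertext of a good cipher looks random

for name, data in (("File_A", file_a), ("File_B", file_b), ("File_C", file_c)):
    ratio = len(zlib.compress(data, level=9)) / len(data)
    print(f"{name}: compresses to {ratio:.1%} of its original size")
# File_B shrinks to a tiny fraction (low algorithmic entropy); File_A and File_C
# stay near 100% (high algorithmic entropy), even though only File_C is designed.
```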

Only in rare cases will high Shannon entropy and low algorithmic entropy (Kolmogorov complexity) result in a design inference. One such example is 500 coins all heads. The general method to infer design (including man-made designs) is that the object:

1. has High Shannon Entropy (high improbability)
2. conforms to an independent (non-postdictive) specification

In contrast to the design of coins being all heads where the Shannon entropy is high but the algorithmic entropy is low, in cases like software or encrypted JPEG files, the design exists in an object that has both high Shannon entropy and high algorithmic entropy. Hence, the issues of entropy are surely nuanced, but on balance entropy is good for design, not always bad for it. In fact, if an object evidences low Shannon entropy, we will not be able to infer design reliably.
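
As a toy illustration of the two-step method listed above (and only an illustration: the 500-bit threshold, the helper names, and the all-heads specification are my own choices), one can check a coin-flip record for (1) enough bits of Shannon entropy under the chance hypothesis and (2) a match to a specification written down independently of the observation:

```python
import math

def matches_spec(observed, specification):
    """Step 2: does the observation conform to an independently given pattern?"""
    return observed == specification

def toy_design_inference(observed, specification, bound_bits=500):
    """Toy sketch of the post's two-step criterion for a record of fair coin flips."""
    shannon_bits = len(observed) * math.log2(2)  # 1 bit per fair coin
    return shannon_bits >= bound_bits and matches_spec(observed, specification)

spec = "H" * 500  # specification stated before the coins are examined
print(toy_design_inference("H" * 500, spec))  # True: 500 bits and conforms to the spec
print(toy_design_inference("H" * 100, spec))  # False: only 100 bits of Shannon entropy
```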

The reader might be disturbed at my final conclusion in as much as it grates against popular notions of entropy and creationist notions of entropy. But well, I’m no stranger to this controversy. I explored Shannon entropy in this thread because it is conceptually easier than its ancestor concept of thermodynamic entropy.

In Part II (which will take a long time to write) I'll explore thermodynamic entropy and its relationship (or lack thereof) to intelligent design. But in brief, a parallel situation often arises: the more complex a design, the higher its thermodynamic entropy. Why? The simple reason is that more complex designs involve more parts (molecules), and more molecules in general imply higher thermodynamic (as well as Shannon) entropy. So the question of Earth being an open system is a bit beside the point, since entropy is essential for intelligent designs to exist in the first place.
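
The "more parts, more entropy" point follows directly from the logarithmic form of the entropy: for (approximately independent) subsystems the microstate counts multiply, so the entropies add, and a design assembled from more molecules has correspondingly more entropy:

$$ \Omega_{\text{total}} = \Omega_1 \times \Omega_2 \;\Longrightarrow\; S_{\text{total}} = k_B \ln(\Omega_1 \Omega_2) = S_1 + S_2 $$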

[UPDATE: the sequel to this thread is in Part 2]

Acknowledgements (both supporters and critics):

1. Elizabeth Liddle for hosting my discussions on the 2nd Law at TheSkepticalZone

2. physicist Olegt who offered generous amounts of time in plugging the holes in my knowledge, particularly regarding the Liouville Theorem and Configurational Entropy

3. retired physicist Mike Elzinga for his pedagogical examples and historic anecdotes. HT: the relationship of more weight to more entropy

4. An un-named theoretical physicist who spent many hours teaching his students the principles of Statistical Mechanics and Thermodynamics

5. physicists Andy Jones and Rob Sheldon

6. Neil Rickert for helping me with Latex

7. Several others that have gone unnamed

NOTE:
[UPDATE and correction: gpuccio was kind enough to point out that in the case of File_B, the design inference isn’t necessarily warranted. It’s possible an accident or programming error or some other reason could make all the bits 1. It would only be designed if that was the designer’s intention.]

[UPDATE 9/7/2012]
Boltzmann

“In order to explain the fact that the calculations based on this assumption [“…that by far the largest number of possible states have the characteristic properties of the Maxwell distribution…”] correspond to actually observable processes, one must assume that an enormously complicated mechanical system represents a good picture of the world, and that all or at least most of the parts of it surrounding us are initially in a very ordered — and therefore very improbable — state. When this is the case, then whenever two or more small parts of it come into interaction with each other, the system formed by these parts is also initially in an ordered state and when left to itself it rapidly proceeds to the disordered most probable state.” (Final paragraph of #87, p. 443.)

That slight, innocent paragraph of a sincere man — but before modern understanding of q(rev)/T via knowledge of molecular behavior (Boltzmann believed that molecules perhaps could occupy only an infinitesimal volume of space), or quantum mechanics, or the Third Law — that paragraph and its similar nearby words are the foundation of all dependence on “entropy is a measure of disorder”. Because of it, uncountable thousands of scientists and non-scientists have spent endless hours in thought and argument involving ‘disorder’ and entropy in the past century. Apparently never having read its astonishingly overly-simplistic basis, they believed that somewhere there was some profound base. Somewhere. There isn’t. Boltzmann was the source and no one bothered to challenge him. Why should they?

Boltzmann’s concept of entropy change was accepted for a century primarily because skilled physicists and thermodynamicists focused on the fascinating relationships and powerful theoretical and practical conclusions arising from entropy’s relation to the behavior of matter. They were not concerned with conceptual, non-mathematical answers to the question, “What is entropy, really?” that their students occasionally had the courage to ask. Their response, because it was what had been taught to them, was “Learn how to calculate changes in entropy. Then you will understand what entropy ‘really is’.”

There is no basis in physical science for interpreting entropy change as involving order and disorder.

Comments
SC: Let's start with
S = k*log W, per Boltzmann
where W is the number of ways that mass and/or energy at ultra-microscopic level may be arranged, consistent with a given Macroscopic [lab-level observable] state. That constraint is crucial and brings out a key subtlety in the challenge to create functionally specific organisation on complex [multi-part] systems through forces of blind chance and mechanical necessity. FSCO/I is generally deeply isolated in the space of raw configurational possibilities, and is not normally created by nature working freely. Nature, working freely, on the gamut of our solar system or of the observed cosmos, will blindly sample the space from some plausible, typically arbitrary initial condition, and thereafter it will undergo a partly blind random walk, and there may be mechanical dynamics at work that will impress a certain orderly motion, or the like. (Think about molecules in a large parcel of air participating in wind and weather systems. The temperature is a metric of avg random energy per degree of freedom of relevant particles, usually translational, rotational and vibrational. At the same time, the body of air as a whole is drifting along in the wind that may reflect planetary scale convection.) Passing on to Shannon's entropy in the information context (and noting Jaynes et al on the informational view of thermodynamics that I do not see adequately reflected in your remarks above -- there are schools of thought here, cf. my note here on), what Shannon was capturing is average info per symbol transmitted in the case of non equiprobable symbols; the normal state of codes. This turns out to link to the Gibbs formulation of entropy you cite. And, I strongly suggest you look at Harry S Robertson's Statistical Thermophysics Ch 1 (Prentice) to see what it seems from appearances that your interlocutors have not been telling you. That is, there is a vigorous second school of thought within physics on stat thermo-d, that bridges to Shannon's info theory. Wikipedia bears witness to the impact of this school of thought:
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
So, when we see the value of H in terms of uncommunicated micro- level information based on lab observable state, we see that entropy, traditionally understood per stat mech [degrees of micro-level freedom], is measuring the macro-micro info-gap [MmIG], NOT the info we have in hand per macro-observation. The subtlety this leads to is that when we see a living unicellular species of type x, providing we know the genome, through lab level observability, we know a lot about the specific molecular states from a lab level observation. The MmIG is a lot smaller, as there is a sharp constraint on possible molecular level configs, once we have a living organism in hand. When it dies, the active informationally directed maintenance of such ceases, and spontaneous changes take over. The highly empirically reliable result is well known: decay and breakdown to simpler component molecules. We also know that in the period of historic observation and record -- back to the days of early microscopy 350 years back, this is passed on from generation to generation by algorithmic processes. Such a system is in a programmed, highly constrained state governed by gated encapsulation, metabolic automata that manage an organised flow-through of energy and materials [much of this in the form of assembled smart polymers such as proteins] backed up by a von Neumann self-replicator [vNSR]. We can also infer on this pattern right back to the origins of cell based life, on the relevant macro-traces of such life. So, how do we transition from Darwin's warm pond with salts [or the equivalent] state, to the living cell state? The dominant OOL school, under the methodological naturalism imposition, poses a claimed chem evo process of spontaneous cumulative change. This runs right into the problem of accessing deeply isolated configs spontaneously. For, sampling theory and common sense alike tell us that pond state -- due to the overwhelming bulk of configs and some very adverse chemical reaction equilibria overcome in living systems by gating, encapsulation and internal functional organisation that uses coded data and a steady flow of ATP energy battery molecules to drive algorithmic processes -- will be dominant over spontaneous emergence at organised cell states (or any reasonable intermediates). There is but one empirically confirmed means of getting to FSCO/I, namely design. In short, on evidence, the info-gap between pond state and cell state, per the value of FSCO/I as sign, is best explained as being bridged by design that feeds in the missing info and through intelligently directed organising work [IDOW] creates in this case a self replicating micro-level molecular nanotech factory. That self replication also uses an information and organisation-rich vNSR, and allows a domination of the situation by a new order of entity, the living cell. So, it is vital for us to understand at the outset of discussion that the entropy in a thermodynamic system is a metric of missing information on the microstate, given the number of microstate possibilities consistent with the macro-observable state. That is, entropy measures the MmIG. Where also, the living cell is in a macro-observable state that initially and from generation to generation [via vNSR in algorithmically controlled action on coded information], locks down the number of possible states drastically relative to pond state. 
The debate on OOL, then is about whether it is a credible argument on observed evidence in the here and now, for pond state, via nature operating freely and without IDOW, to go to cell-state. (We know that IDOW routinely creates FSCO/I, a dominant characteristic of living cells.) A common argument is that raw injection of energy suffices to bridge the info-gap without IDOW, as the energy flow and materials flows allow escape from "entropy increases in isolated systems." What advocates of this do not usually disclose, is that raw injection of energy tends to go to heat, i.e. to dramatic rise in the number of possible configs, given the combinational possibilities of so many lumps of energy dispersed across so many mass-particles. That is, MmIG will strongly tend to RISE on heating. Where also, for instance, spontaneously ordered systems like hurricanes are not based on FSCO/I, but instead on the mechanical necessities of Coriolis forces acting on large masses of air moving under convection on a rotating spherical body. (Cf my discussion here on, remember, I came to design theory by way of examination of thermodynamics-linked issues. We need to understand and visualise step by step what is going on behind the curtain of serried ranks of algebraic, symbolic expressions and forays into calculus and partial differential equations etc. Otherwise, we are liable to miss the forest for the trees. Or, the old Wizard of Oz can lead us astray.) A good picture of the challenge was posed by Shapiro in Sci AM, in challenging the dominant genes first school of thought, in words that also apply to his own metabolism first thinking:
RNA's building blocks, nucleotides, are complex substances as organic molecules go. They each contain a sugar, a phosphate and one of four nitrogen-containing bases as sub-subunits. Thus, each RNA nucleotide contains 9 or 10 carbon atoms, numerous nitrogen and oxygen atoms and the phosphate group, all connected in a precise three-dimensional pattern. Many alternative ways exist for making those connections, yielding thousands of plausible nucleotides that could readily join in place of the standard ones but that are not represented in RNA. That number is itself dwarfed by the hundreds of thousands to millions of stable organic molecules of similar size that are not nucleotides [--> and he goes on, with the issue of assembling component monomers into functional polymers and organising them into working structures lurking in the background] . . . . [--> Then, he flourishes, on the notion of getting organisation without IDOW, merely on opening up the system:] The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck.
Orgel's reply, in a posthumous paper, is equally revealing on the escape-from-IDOW problem:
If complex cycles analogous to metabolic cycles could have operated on the primitive Earth, before the appearance of enzymes or other informational polymers, many of the obstacles to the construction of a plausible scenario for the origin of life would disappear . . . Could a nonenzymatic “metabolic cycle” have made such compounds available in sufficient purity to facilitate the appearance of a replicating informational polymer? It must be recognized that assessment of the feasibility of any particular proposed prebiotic cycle must depend on arguments about chemical plausibility, rather than on a decision about logical possibility . . . few would believe that any assembly of minerals on the primitive Earth is likely to have promoted these syntheses in significant yield. Each proposed metabolic cycle, therefore, must be evaluated in terms of the efficiencies and specificities that would be required of its hypothetical catalysts in order for the cycle to persist. Then arguments based on experimental evidence or chemical plausibility can be used to assess the likelihood that a family of catalysts that is adequate for maintaining the cycle could have existed on the primitive Earth . . . . Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of [for instance] the reverse citric acid cycle was present anywhere on the primitive Earth [8], or that the cycle mysteriously organized itself topographically on a metal sulfide surface [6]? The lack of a supporting background in chemistry is even more evident in proposals that metabolic cycles can evolve to “life-like” complexity. The most serious challenge to proponents of metabolic cycle theories—the problems presented by the lack of specificity of most nonenzymatic catalysts—has, in general, not been appreciated. If it has, it has been ignored. Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own . . . . The prebiotic syntheses that have been investigated experimentally almost always lead to the formation of complex mixtures. Proposed polymer replication schemes are unlikely to succeed except with reasonably pure input monomers. No solution of the origin-of-life problem will be possible until the gap between the two kinds of chemistry is closed. Simplification of product mixtures through the self-organization of organic reaction sequences, whether cyclic or not, would help enormously, as would the discovery of very simple replicating polymers. However, solutions offered by supporters of geneticist or metabolist scenarios that are dependent on “if pigs could fly” hypothetical chemistry are unlikely to help.
So, we have to pull back the curtain and make sure we first understand that the sense in which entropy is linked to information in a thermodynamics context is that we are measuring missing info on the micro-state given the macro-state. So, we should not allow the similarity of mathematics to lead us to think that IDOW is irrelevant to OOL, once a system is opened up to energy and mass flows. In fact, given the delicacy and unfavourable kinetics and equilibria involved -- notice all those catalysing enzymes and ATP energy battery molecules in life? -- the challenge of IDOW is the elephant standing in the middle of the room that ever so many are desperate not to speak about. KF
kairosfocus
September 5, 2012 at 3:08 AM PDT
but on balance entropy is good for design, not always bad for it.
One problem, as with neo-Darwinists, you don't have any physical example of 'not always bad'. i.e. you have not one molecular machine or one functional protein coming about by purely material processes. But the IDists and creationists have countless examples of purely material processes degrading as such.
bornagain77
September 5, 2012 at 2:04 AM PDT