
“No process can result in a net gain of information” underlies 2LoT


Further to Granville Sewell's work on the 2nd Law in an open system, here is Duncan & Semura's profound insight into how loss of information is the foundation for the 2nd Law of Thermodynamics. This appears foundational to understanding, developing, and testing theories of origins and of subsequent change in physical and biotic systems. ———————

The key insight here is that when one attempts to derive the second law without any reference to information, a step which can be described as information loss always makes its way into the derivation by some sleight of hand. An information-losing approximation is necessary, and adds essentially new physics into the model which is outside the realm of energy dynamics.


4. Summary of the Perspective

1) Energy and information dynamics are independent but coupled (see Figure 1).
2) The second law of thermodynamics is not reducible purely to mechanics (classical or quantum); it is part of information dynamics. That is, the second law exists because there is a restriction applying to information that is outside of and additional to the laws of classical or quantum mechanics.
3) The foundational principle underlying the second law can then be expressed succinctly in terms of information loss:
“No process can result in a net gain of information.”
In other words, the uncertainty about the detailed state of a system cannot decrease over time – uncertainty increases or stays the same.

The information loss perspective provides a natural framework for incorporating extensions and apparent challenges to the second law. The principle that “no process can result in a net gain of information” appears to be deeper and more universal than standard formulations of the second law.
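As an illustration only (not drawn from Duncan & Semura's paper): the "no net gain of information" principle can be seen in a toy model. If a probability distribution over a system's microstates evolves under a doubly stochastic (mixing) transition matrix, the Shannon uncertainty about the microstate can only stay the same or grow. A minimal Python sketch with purely hypothetical numbers:

import numpy as np

def shannon_entropy(p):
    """Shannon uncertainty H(p) = -sum p_i ln p_i (in nats), ignoring zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Hypothetical 4-state system, initially known almost exactly (low uncertainty).
p = np.array([0.97, 0.01, 0.01, 0.01])

# A doubly stochastic transition matrix (rows and columns each sum to 1):
# it mixes states without preferring any of them.
T = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
    [0.1, 0.1, 0.1, 0.7],
])

for step in range(10):
    print(f"step {step}: H = {shannon_entropy(p):.4f} nats")
    p = p @ T   # uncertainty about the microstate never decreases under this map

print(f"maximum possible H = {np.log(4):.4f} nats (uniform distribution)")

Running the sketch shows H climbing monotonically toward ln 4, i.e. uncertainty increases or stays the same, which is the sense of "information loss" used above.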

. . . the information-loss framework offers the possibility of discovering new mechanisms of information storage through the analysis of second law challenges, deepening our understanding both of the second law and of information dynamics.

See full paper: Information Loss as a Foundational Principle for the Second Law of Thermodynamics, T. L. Duncan, J. S. Semura
Foundations of Physics, Volume 37, Issue 12, pp. 1767-1773, DOI 10.1007/s10701-007-9159-z
This builds on Duncan & Semura’s first paper:
The Deep Physics Behind the Second Law: Information and Energy As Independent Forms of Bookkeeping, T. Duncan, J. Semura, Entropy 2004, 6, 21-29, arXiv:cond-mat/0501014v1

Comments
Okay: Finally able to access UD today. TVR, re: "w (as in S = k log w) is not fine tuned enough a parameter to pick up all the subtleties of complex organization":
1 --> Last time I checked, w [strictly omega, but I understand Boltzmann used W . . .] is the number of microstates compatible with a macrostate, i.e. it is the statistical weight of a given macrostate. (Onlookers: macrostates are in effect coarse-/lab-scale observable states, which usually leave the underlying microstates to vary across a wide array that can in principle be counted or at least estimated.)
2 --> When we look at micro-level configurations of matter and energy, we can indeed define a wider config space in which the relevant functional and non-functional macrostates occur with their relevant weights.
3 --> It is intuitively obvious that non-functional states overwhelm functional ones, as is a commonplace.
4 --> Doubtless, you will be familiar with how the statistical form of the 2nd law is set up through comparative weight counts driving probabilities of states, and indeed how s = k ln w is set up by partitioning a body with w states into two parts with microstates w1 and w2, leading to a log measure. [Onlookers, an excellent source is Nash's Elements of Statistical Thermodynamics. I only wish I had had a little less physics arrogance when I was an undergrad and had been willing to learn from a chemist!]
5 --> Indeed, the point on the likelihood of being in different configs at random is, in effect, the root of Sir Fred Hoyle's 747-in-a-junkyard remark, which I took down to semi-molecular scale in my always linked, appendix A, point 6. [Onlookers, observe how studiously TVR avoids addressing this. And FYI TVR, the rough cell splitting and counting I do there is similar to that done by one certain Josiah Willard Gibbs.]
6 --> By looking at my discussion as just linked and how it interacts with Thaxton et al's TMLO chs 7 - 9, you will see how Brillouin's negentropy view and metric of information can be used to analyse the movement from [a] a scattered-at-random state to [b] a clumped-at-random state to [c] a functional state. In each case, the number of ways matter and relevant energy are configured becomes increasingly confined as we move from one state to the next, so the number of available configurations from the overall config space falls.
7 --> Thus, w falls twice in succession, and so entropy falls / the Brillouin information metric [recall there are several valid metrics of information] rises as work is done on the originally chaotic matter in TBO's prebiotic soup or my vat of pre-microjet liquid.
8 --> In effect we are undoing diffusion, which is an entropy-increasing process, through increasing volumetric confinement of particles: in my thought expt, from 1 cu m to about 1 cu cm, then to a much smaller possibility of order 10^-6 cu m for individual parts to get to a functional config. I used 1-micron-cubed cells to do rough state counts, to make the point [the cells are too coarse but that is good enough for the purpose in view].
So, sorry, TVR: I HAVE done the counts. And TBO in TMLO did the thermodynamics about 25 years back, and Bradley has updated it since in more modern terms using Cytochrome C. The message is in the end simple, though too often hard to accept: functioning configs are very rare and isolated [even if clustered as islands] in the overall config space available to the monomers comprising life-functional molecules, and then to the living cells that use these molecules to function.
So much so that there is a considerably complex mechanism to form cells and make them work, based on algorithmic codes. For DNA, to pick just one entity, that is to be found in strings from about 300 - 500 thousand base pairs up. A 300k base pair molecule has an available clustered config space (as the info is not stored in the chemistry of chaining) of 4^300k ~ 9.94*10^180,617. The number of quantum states of our cosmos across its lifespan is about 10^150, i.e. the number of states it can access. If we allow islands of functionality of 10^150 states, and we allow 10^1000 of them [vastly more than the number of individual DNA-based life forms that could ever exist in our observed cosmos], there would be 10^1150 such states, in 10^1000 islands of 10^150 states. With a space of 9.9 * 10^180,617 config states, the islands of function will be vastly inaccessible to any random-walk-initiating process. And, that is assuming we can naturally synthesise the monomers, separate the chiralities and chain to the required lengths. Sorry, I know empirically that intelligent agents, using understanding and algorithmic processes, can create entities that are that isolated in such config spaces. So if offered the choice of believing that chance + necessity got us to life from some prebiotic soup or other, or one or more intelligent agent[s] did it, the choice is obvious, save to those who are committed to the impossibility of such agents at the required place and time. As a matter of fact, that we are so fearfully and wonderfully made is itself strong testimony to such agent[s] at the time and place in question, once one has not closed-mindedly begged the question. For, we have a known capable process vs a known incapable process (by overwhelming improbability). The only out for the latter would be to postulate an unobserved quasi-infinite wider cosmos as a whole embedding a similarly quasi-infinite number of sub-cosmi in which the physics and chemistry etc. vary to give enough probabilistic resources to get the odds down. It is therefore no surprise to see this naked ad hoc metaphysical resort on the part of an increasing number of evolutionary materialists. But, we do observe just one cosmos, and it is finite in time, extent and matter, so far as we can observe. So somebody is running away from the empirical evidence when it no longer fits his views and preferences, into ad hoc metaphysical speculation. That means that the effective choice is between a quasi-infinite unobserved wider cosmos as a necessary being and an agent capable of cosmogenesis and origination of life. Which is the better explanation is not that hard to choose . . . ; - ) GEM of TKIkairosfocus
March 13, 2008, 07:33 AM PDT
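The config-space arithmetic in the comment above can be checked with base-10 logarithms. A rough Python sketch, reusing the comment's own figures (300,000 bases, ~10^150 states for the observed cosmos, 10^1000 islands of 10^150 states each):

from math import log10

# Configuration space of a 300,000-base DNA string (4 symbols per position).
bases = 300_000
log10_space = bases * log10(4)          # ~180,617.997...
mantissa = 10 ** (log10_space % 1)      # ~9.94
print(f"4^{bases} ~ {mantissa:.2f} x 10^{int(log10_space)}")

# The comment's generous allowance of functional states:
# 10^1000 islands of 10^150 states each = 10^1150 functional configurations.
log10_functional = 1000 + 150
fraction = log10_functional - log10_space   # log10 of the functional fraction
print(f"functional fraction ~ 10^{fraction:.0f}")

This reproduces the quoted 9.94 x 10^180,617 figure and shows the assumed functional fraction coming out around 10^-179,468; the sketch only checks the arithmetic, not the biological assumptions behind it.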
Sorry to stretch your patience, DLH. I'll take another look at Granville's work, but let me just summarize my current position:

1. w (as in S = k log w) is not fine tuned enough a parameter to pick up all the subtleties of complex organization. As far as w is concerned, organisms are structures more disordered than crystals but less disordered than gases; that's all that w 'knows' about. To 'w', organisms look like 'fancy crystals'.

2. No one is suggesting that entropy creates anything. It is only a bulk measure that in isolated systems increases with time. Yes, of course, advancing entropy ultimately disrupts all forms of order, crystals as well as humans, but fine distinctions of structure are not part of the remit of so gross a quantity, which tells us very little about the details as the system runs down; the run-down entailed by dw/dt greater than zero doesn't eliminate local increases of order. And remember, as far as entropy is concerned organisms are just ordered lumps of matter – it doesn't measure our intuitive notion of organized complexity or 'mutual information'.

3. Ovens create a temperature gradient, and a temperature gradient entails an order that can quickly be converted into organized forms of energy, like kinetic or potential (e.g. weather systems).

4. DLH, unless you are successfully reading the subtext, I'm not, repeat not, repeat not, making here any claims in this particular connection about 'self organization' or even about whether evolution has actually occurred or not (IC might well stop the whole show). All I am saying is that the second law, as it stands, is too blunt an instrument to eliminate evolution. I would be saying these things even if I became convinced of ID.

5. Your examples of information media cookery (must try it when I fancy a byte or two) may elicit an intuitive gut reaction about the implausibility of evolution, but I'm not, repeat not, here referring to the likelihood of evolution. I'm only commenting on the soundness of using the second law as an evolution stopper. In any case there are gut reactions and gut reactions. The SETI people look into the sky and say "Look at all those stars, life must be out there somewhere!" Gut reactions may convey valid information but they must be treated with caution.

Once again, sorry for stretching everyone's patience. (Even Gpuccio has given up on me!) I'll have another look at Granville's work and see where I am going wrong. It is always possible that Granville is working with an upgraded concept of entropy that picks out the more subtle features of complex organization/mutual information. In this connection what we require is a quantity, call it 'X' (perhaps looking a bit like mutual information), measuring organized complexity, that peaks for organisms somewhere between w = 0 and w = maximum. But that is just a shot from the hip, so don't take it too seriously.Timothy V Reeves
March 13, 2008, 05:50 AM PDT
TVR Try thinking through what is required to read/write information onto your RAM, hard drive, or CD. Contrast that with how a crystal is formed. For information to be recorded, the material must be able to be physically changed to reflect a 0 or 1; e.g. high/low magnetism, or optical reflection, or charge, etc. There are no physical laws or "self organization" that "require" any specific coded message. To the contrary, any "self organization" prevents coded information from being recorded. What will happen if you put your CD or hard drive or RAM in the oven and "bake" it at 450F for 3 hours? Will that process record your cookbook since the food was previously cooked in that oven? Or will it destroy any recipes you might have had recorded on that storage media? Entropy will destroy coded information. Entropy cannot create coded information. That is the practical colloquial heart of the issue as to why entropy is degrading of biotic systems, not "creating" them. Thus entropy is a show stopper for evolution when it is recognized for this difference between coded information and self organization. The rest is working out math details and explanations, probabilities, etc. Look again at Granville's work in light of coded information vs self "ordering" by natural law. An encyclopedia does not come about by random mutation and natural selection.DLH
March 12, 2008, 11:29 AM PDT
Thanks Kairosfocus for the replies.

If I understand you correctly you affirm IC as an evolution stopper when you say, for example:

The key relevance of this to the design inference — and BTW, this is not the same as the Divine inference, Tim — is that relevant configurations of the systems of interest yield huge config spaces, with islands of relevant functionality being exceedingly isolated. So, when searches based on chance + necessity only engage in in effect random walks from initial arbitrary conditions, they most likely by overwhelming statistical weight of non functional macrostates, will never find the minimally functional shores of an island of functionality.

I'll certainly be interested to know if you have made analytical progress in the representation of those huge configuration spaces, and can prove that the functionality we see in organic structures actually represents isolated regions of functionality. However, as I personally haven't developed the analytical wherewithal to analyze these spaces, and as this comment thread is ostensibly about the second law (and not IC), I thought it more appropriate here to investigate the much more analytically amenable second law, and try to establish if there really is, as Granville seems to be claiming, an evolution stopper here. Therefore, it is the second law I would like to focus on in this context.

Of the second law you yourself say:

So, that local increases of order or even organisation do not violate in themselves the classical form of the 2nd law of thermodynamics is a red herring. Indeed, we see such things happening all the time, and our economy is based on it.

What are you saying here? Are you saying that the second law is not an evolution stopper after all and that perhaps we should move on to IC? That is a point (if that is what you are saying) with which I am inclined to agree.

The problem with the second law is that it deals only with a rather crude measure w (where S = k log w), although as your quotes affirm this is a 'perfectly objective quantity'. Crystals have a much lower w and therefore a much higher order than organisms, for the simple reason that there are far more ways of being an organism than a crystal, although, of course, organisms are still highly ordered in a relative sense.

But organisms are not just highly ordered; they also have another quality called 'complexity', a quality that is difficult to characterize and pin down quantitatively. I myself have been aware of the question of 'complexity' (as opposed to simple order) since the seventies, which puts me back well into your 25-35 year range! Perhaps something like 'mutual information' or some other quantity might capture the essence of organized complexity, but the fact is that the second law deals only in w, plain and simple, and as such doesn't, as far as I can see, act as a bar to evolution ... unless, of course, someone has developed a much more sophisticated rendering of the second law that is sensitive to things like 'complexity', 'morphospace', 'functional isolation' etc. I would like to know if anyone has.

I want to know the answer to this question: Does the second law as it currently stands (or at least as I understand it) constitute an evolution stopper by itself?
(In this connection I'll have a look at your appendix.) I don't want to look as though I am deliberately obstructive, awkward or a Machiavellian character with an ulterior agenda of motives, but that is probably a hazard I'll have to face, because as I have said before I've landed in a war zone where suspicions are rampant and emotions are running high. In such a zone one is carefully and suspiciously read for one's true allegiance.Timothy V Reeves
March 12, 2008, 10:19 AM PDT
TVR, DLH et al and Onlookers: A further footnote on the second law of thermodynamics and its relevance to the design inference. For, I find the following from TVR, no. 24 supra, interesting; indeed, inadvertently revealing:
That the conjectured evolution of life represents a local increase in order is not in and of itself enough to rule out evolution – as we have seen crystal formation entails a local increase of order. In crystallization this order is not explicitly imported across the boundary but is inherent in atomic forces, with trial and error matches being made by the random agitations of atoms and locked in by these forces if a match is found. An exhaust of waste heat to the general environment offsets the entropy decrease entailed by the crystallization. Hence, observations of local increases in order (whether from crystallization or evolution) are not in themselves violations of the second law. This is not to say that one can’t rule out evolution on other grounds such as IC (which states that the contours of stability in morphospace enabling organisms to ‘crystallize out’ in stages don’t exist) . . .
1 --> Let's go back, to Thaxton et al, TMLO ch 8, i.e. 1984 [and citing Yockey, Wickens et al from the 70's - early 80's; cf here, too, appendix 3, my always linked]:
TMLO ch 8: Peter Molton has defined life as "regions of order which use energy to maintain their organization against the disruptive force of entropy."1 In Chapter 7 it has been shown that energy and/or mass flow through a system can constrain it far from equilibrium, resulting in an increase in order. Thus, it is thermodynamically possible to develop complex living forms, assuming the energy flow through the system can somehow be effective in organizing the simple chemicals into the complex arrangements associated with life. In existing living systems, the coupling of the energy flow to the organizing "work" occurs through the metabolic motor of DNA, enzymes, etc. This is analogous to an automobile converting the chemical energy in gasoline into mechanical torque on the wheels. We can give a thermodynamic account of how life's metabolic motor works. The origin of the metabolic motor (DNA, enzymes, etc.) itself, however, is more difficult to explain thermodynamically, since a mechanism of coupling the energy flow to the organizing work is unknown for prebiological systems . . . . "a periodic structure has order. An aperiodic structure has complexity." . . . . "Nucleic acids [i.e. DNA, RNA] and protein are aperiodic polymers, and this aperiodicity is what makes them able to carry much more information." . . . . "only certain sequences of amino acids in polypeptides and bases along polynucleotide chains correspond to useful biological functions." . . . . [Orgel:] "Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity."6 [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. This seems to be the first use of the term specified complexity, i.e it is a natural emergence from OOL research in the 1970's, not a suspect innovation of the ID movement circa 1990s] . . . . Yockey7 and Wickens5 develop the same distinction, that "order" is a statistical concept referring to regularity such as could might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, "organization" refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future.
2 --> That is, the relevant distinction has been made in the relevant literature at least 24 - 35 years ago. Crystallisation [or formation of hurricanes or the like] is simply utterly distinct from code-bearing complex functional organisation that works algorithmically. So much so that the persistence of such an irrelevant argument reflects rhetorical, closed-minded objectionism through the use of red herrings to lead out to handy, set up, oil-soaked strawmen that can be ignited to distract attention, and cloud and poison the atmosphere of discussion; rather than any serious issue. (And red herring, strawman and atmosphere-poisoning arguments "work" in the sense of distracting attention and diverting from the merits of an issue; that is why they are so often resorted to by those who cannot answer an issue on its merits.) TVR, and others, let us not fall for such diversionary -- and too often not merely delusional but, sadly, outright deceptive -- tactics.
3 --> For, the question is not whether naturally occurring boundary conditions may through natural regularities [such as ionic attraction or the like] foster the emergence of order. It is whether FSCI-rich algorithmic systems can spontaneously emerge without the commonly observed causative force for such systems, intelligent action.
4 --> And, for that, we see a major obstacle, namely that once we cross the threshold of 500 - 1,000 bits of information storage, we are dealing with config spaces with over 10^300 cells, which suffices to so confine islands of functionality that a random-walk-based search process that begins at an arbitrary initial point, by overwhelming probability on the gamut of the observed cosmos, will start and terminate in nonfunctional states, never once traversing such an island of functionality. Thus, the force of Dembski and Marks' point that the critical success factor in searches of such config spaces is the injection of active information, i.e. deriving from intelligent, purposeful, insightful, even creative agency. [That is, agents DESIGN FSCI-rich algorithms and structures that physically implement them to achieve their goals.]
5 --> This also means that, relative to chance + necessity only [i.e. without active information to move from maximally improbable to highly likely], competitive, functionality-improving hill-climbing processes will not be able to get a chance to begin the climb, as minimal functionality is a condition of being able to improve through selection processes. Climbing Mt Improbable must first begin by getting to the island where it is.
6 --> So, that local increases of order or even organisation do not violate in themselves the classical form of the 2nd law of thermodynamics is a red herring. Indeed, we see such things happening all the time, and our economy is based on it. But, when it comes to FSCI-rich systems and structures, we observe that they are invariably the product of intelligent agents whenever we can directly observe the origination process.
7 --> The relevant underlying theoretical, scientific point -- as I discussed in my always linked appendix 1 [and as I pointed to above in nos. 4 and 14 - 15] -- is the statistical underpinnings of that second law, i.e., due to overwhelming statistical weight, it is maximally improbable to get to functionally specified complex information by chance + necessity; on the gamut of the observed cosmos.
8 --> My nanobots and microjets thought expt shows why. GEM of TKIkairosfocus
March 12, 2008, 02:08 AM PDT
A footnote: I here excerpt Harry Robertson's Statistical Thermophysics, as that is very relevant to the above: ++++++++ . . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . . [pp. vii - viii] . . . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability should be seen as, in part, an index of ignorance] . . . . [deriving informational entropy . . . .] S({pi}) = - C [SUM over i] pi*ln pi, [. . . "my" Eqn A.4] [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp - beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . .[pp.3 - 6] S, called the information entropy, . . . correspond[s] to the thermodynamic entropy, with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context [p. 7] . . . . Jayne's [summary rebuttal to a typical objection] is ". . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly 'objective' quantity . . . it is a function of [those variables] and does not depend on anybody's personality. There is no reason why it cannot be measured in the laboratory." . . . . [p. 36.] [Robertson, Statistical Thermophysics, Prentice Hall, 1993. (NB: Sorry for the math and the use of text for symbolism. 
However, it should be clear enough that Robertson first summarises how Shannon derived his informational entropy [though Robertson uses s rather than the usual H for that information theory variable, average information per symbol], then ties it to entropy in the thermodynamic sense using another relation that is tied to the Boltzmann relationship above. This context gives us a basis for looking at the issues that surface in prebiotic soup or similar models as we try to move from relatively easy-to-form monomers to the more energy- and information-rich, far more complex biofunctional molecules.)] ++++++++ I trust this is helpful. The key relevance of this to the design inference -- and BTW, this is not the same as the Divine inference, Tim -- is that relevant configurations of the systems of interest yield huge config spaces, with islands of relevant functionality being exceedingly isolated. So, when searches based on chance + necessity only engage in in effect random walks from initial arbitrary conditions, they most likely by overwhelming statistical weight of non functional macrostates, will never find the minimally functional shores of an island of functionality. On the gamut of the observed cosmos across its lifespan. [Cf my discussion of the nanobots and microjets in APP 1 to the always linked.] That means that hill-climbing algorithms won't get a chance to start working off competitive degrees of functionality. When it comes to crystals and snowflakes etc., we already can see that there is an ordering natural regularity which works with circumstances to create order [regularity] and perhaps chance-based variability, e.g. the dendritic hexagonal snowflakes so beloved of photographers. But, to encode functional information of complexity relevant to what we need for, say, life, we are not looking at such order but at complex organisation according to codes and machinery to express the codes physically. Information metrics are relevant to the capacity of channels and storage entities to carry such codes. But that is not a metric of the functionality of what is for the moment in the channel or in the storage unit. Nor will lucky noise substitute for intelligent action, as we know from the law of sampling that the vast majority of samples of a population will reflect its typical cross section. That is, since non-functional states are in the relevant contexts overwhelmingly dominant, we end up in the non-functional macrostate and run out of probabilistic resources on the gamut of the observed cosmos before we can credibly access functionality through lucky noise. On the other hand, intelligent agents routinely use understanding to design and implement functionality-rich codes that go beyond 500 - 1,000 bits, i.e. the reasonable observed-cosmos-level threshold for discoverability of even large islands of functionality by chance process. GEM of TKIkairosfocus
March 11, 2008, 12:15 AM PDT
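A small sketch of the relation quoted from Robertson above, with the probabilities taken as a Boltzmann distribution p_i = exp(-E_i/kT)/Z and the informational entropy S = -k SUM p_i ln p_i. The energy levels and temperatures below are purely illustrative values, not taken from the excerpt:

import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K

def boltzmann_entropy(energies, T):
    """Informational entropy S = -k sum p_i ln p_i for a Boltzmann distribution
    p_i = exp(-E_i / kT) / Z, following the form quoted from Robertson."""
    beta = 1.0 / (k_B * T)
    weights = np.exp(-beta * np.array(energies))
    Z = weights.sum()                 # partition function
    p = weights / Z
    return -k_B * np.sum(p * np.log(p))

# Hypothetical three-level system with energy gaps of order kT at room temperature.
levels = [0.0, 2.0e-21, 4.0e-21]      # joules (illustrative values only)
for T in (100.0, 300.0, 1000.0):
    print(f"T = {T:6.1f} K  ->  S = {boltzmann_entropy(levels, T):.3e} J/K")

As the temperature rises the distribution over microstates flattens toward uniform and S approaches k ln 3, which is the "uncertainty becomes a thermodynamic variable" point the excerpt is making.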
Sorry Dave, but I hope you've got time for a little more explaining because, evolution or no evolution, Granville's work isn't sinking into my skull: I can make little sense of it when applied to the relatively prosaic process of crystal formation.

In crystal formation a domain of high order (albeit simple) develops and the containing solution warms a little. Therefore the entropy books report a gain greater than or equal to zero because the reduction in w incurred by the crystal formation is more than compensated for by the increase in w caused by the solution warming. (Where w = number of microscopic states consistent with the macroscopic state as defined by temperature, volume, pressure etc., and where S = k log(w).)

In the context of crystal formation I find difficulty trying to apply Granville's notion of explicit 'information' crossing a domain boundary bringing order to the system. Fair enough, given that Granville supports ID, I understand why he believes that direct Divine information input is required to bring about the extravagantly complex and ordered systems of life. But the organizational 'low end' doesn't need to crib directly from the Divine mind. In crystal formation no books on crystallography cross into the crystal domain telling the atoms how to organize. Crystals crystallize because the physical regime providentially provides for a relatively simple morphospace containing 'contours of stability', 'ratchets of formation', or 'Dawkins slopes' or whatever you want to call them. In this process there are no imports of explicit information across the boundary, but there is an export of randomized kinetic energy (that is, heat) maintaining a positive overall value in the entropy increment. A similar analysis may be carried out on the heat pouring in from the Sun to the Earth. The physics of the Earth system uses the low entropy of the sun/outer space temperature gradient to produce organized forms of energy, namely kinetic energy (winds) and potential energy (clouds). So pumping heat into something can produce order at least at the organizational low end.

That the conjectured evolution of life represents a local increase in order is not in and of itself enough to rule out evolution – as we have seen crystal formation entails a local increase of order. In crystallization this order is not explicitly imported across the boundary but is inherent in atomic forces, with trial and error matches being made by the random agitations of atoms and locked in by these forces if a match is found. An exhaust of waste heat to the general environment offsets the entropy decrease entailed by the crystallization. Hence, observations of local increases in order (whether from crystallization or evolution) are not in themselves violations of the second law. This is not to say that one can't rule out evolution on other grounds such as IC (which states that the contours of stability in morphospace enabling organisms to 'crystallize out' in stages don't exist).

I'm sorry Dave but I don't understand the relationship between subjective and objective information as you have described it. If we represent subjective information by SI and objective information by OI, are you saying that a conservation law of form d(SI) + d(OI) = 0 holds? If I have understood you correctly then this fails to make sense to me, because I can conceive circumstances where both d(SI) and d(OI) increase. Take a transmitted binary sequence: here the macroscopic parameters are the statistical frequency profiles.
If during the transmission of the signal the frequency profiles change and start to approximate closer to that produced by coin tossing, then clearly d(OI) is positive. Now, if all we know about the sequence are its statistical profiles then it is true that subjective information decreases because the known macroscopic parameters provide an envelope that is consistent with a much larger number, w, of possible sequence configurations. But if as the frequency profiles change to a more disordered regime we simultaneously start to read the sequence bit by bit then that means that d(SI) has also increased and hence d(SI) + d(OI) != 0. Please note that when reading the sequence a ‘hidden’ channel of interpretive information is not a necessary concomitant: If one is reading a random sequence, it need not necessarily have further meaning; the configuration of bits may be all one wants to know and blow any conjectured ‘interpretation’. . The assumption that subjective information entails pattern is not true. Pattern entails compressibility of knowledge, but let me repeat: subjective information doesn’t entail pattern; we may have a brain like the memory man and be capable of memorizing a book of random numbers. Therefore subjective knowledge is not what we are ultimately interested in here. What interests us here are those objective patterns called organisms whether we know about them or not. . You concede, Dave, that there may be unknown laws capable of generating organisms. This may be true, but there is also another option, although IC explicitly denies it; namely, that the morphospace implicit in the regime of physical laws we already know does have contours of stability running through it all the way up to organic forms, thus enabling these forms to ‘crystallize out’ in stages. What I am saying is that perhaps the ‘unknown laws’ which you admit could in principle be capable of generating life are already known to us! But of course this is where the real debate starts. I have to confess that although I can make some rather general and abstract statements about this issue, as I’m not a paleontologist, natural historian, or biochemist, I can’t argue very cogently about it one way or the other. But I'll try!Timothy V Reeves
March 10, 2008, 09:57 AM PDT
Part of the confusion over "order" is that in English it is used in two senses: one is "order" due to physical laws, as in a crystal; the other is order in the sense of "ordering" a sequence from low to high integers, etc. Better to use Natural Law for the first and Specified Information for the second. Shannon "information" is being referred to as "information entropy". It can be thought of as the capacity to carry a signal, and not the information within the signal itself. So Timothy Reeves' "Subjective information" is better termed "Specified Information" and "Objective information" better called "information entropy".DLH
March 9, 2008, 08:43 PM PDT
William The information stream is maximized when it cannot be perfectly described in fewer bits than the stream contains. If it contains encrypted information which appears to be random then the description of the encryption code becomes part of the stream (no free lunch). For example, we can't mutually agree that preselected random codes in an information stream refer to encyclopedia articles without incorporating both the codebook and the articles to which they refer as part of the information stream. You're basically saying that hidden channels don't count. Hidden channels do count. DaveScot
March 8, 2008, 03:51 PM PDT
18:
Tim, exactly what “objective” messages are you receiving when you read maximum entropy random coin flips — or disordered particle positions?
May I answer? What message could be more objective than a description of the state of an object?Turner Coates
March 8, 2008, 03:00 PM PDT
Hi Dave,
A maximally loaded Shannon information channel is completely random.
I think I have to disagree here. A maximally loaded information channel only seems random in isolation because all of the order to which it refers is now kept "offshore." As I see it, the channel must be considered ordered (non-random) because it is, in every specified bit, a crucial component of a larger system (order).William Brookfield
March 8, 2008, 02:17 PM PDT
Timothy Reeves Good. I've written here on several occasions that there is a difference between subjective and objective information. They can't be interchanged as equal quantities but, here's the catch, they both can be subject to the law of entropy and the law of conservation. The flaw in the Darwinian open-system response is they equate all kinds of order as interchangeable quantities and clearly they are not. Adding heat to an open system doesn't increase carbon order. In fact it does just the opposite. Pour heat into a log and see if its carbon order increases. Like duh. Pour carbon order into a log and its carbon order increases. This is what Granville is trying to explain a million different ways and it just isn't sinking in with some people. Order can be imported across a boundary but different types of order, even though they obey the same laws, are not interchangeable. Now let's see if you understand the relationship between Shannon (objective) information and subjective (specified) information. As you wrote subjective information can be destroyed. The law of entropy assures us it will be destroyed over time. But here's the kicker, as subjective information is destroyed objective information increases. A maximally loaded Shannon information channel is completely random. If it can be compressed then it isn't carrying the maximum amount of objective information. You understand that well enough. If there is any subjective information in the channel it means there's a pattern in it and the pattern by definition makes it less than completely random. So in that sense the thermodynamic law of conservation of energy - energy is neither created nor destroyed but only changes form - holds true for information as well. Information cannot be created nor destroyed but only changes form. In this case the law of entropy changes its form from subjective to objective. To go the other way, opposite to the law of increasing entropy, requires importing subjective order across the boundary in an open system just as we must import carbon order to increase carbon order and we can't exchange thermal order for carbon order. If you've followed along so far then we come to an important question. If the only way to increase subjective order in an open system is by importing subjective order across a boundary what is the source of that imported subjective order? The only reasonable thing I can think of is it must have come from a like subject. I can put a book loaded with subjective information in front of my dog but he won't understand it. That's because he's not a subject like myself. Following that if we find subjective order in the universe and the source wasn't human then it seems like it must then at least come from a subject something like ourselves. If we were created in God's image then that satisfies the requirement for a like subject. Or maybe it's an evolved intelligence. Whatever it is it must be something like ourselves or we wouldn't be able to discern the subjective information just like my dog can't discern the information in a book. I'm not sure which prior century the Darwinians are pulling their understanding of the laws of thermodynamics from but it sure isn't the 20th or 21st centuries. Those laws were generalized a long time ago from thermal order to other types of order and in the latter 20th century were generalized to objective ( Shannon)information as well. 
I don't know if they've been generalized to include subjective information before but I just did and given the ease of doing it I can't imagine I'm the first. Now it may very well be that there are unknown physical laws in operation which cause intelligent life to emerge from them just like the known physical laws that produce snowflakes and other crystalline order from unordered matter but until those laws can be described they are nothing more than flights of fantasy and I will readily concede that Darwinian chance worshippers are experienced pilots in the fantasy airline. DaveScot
March 8, 2008, 02:28 AM PDT
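DaveScot's compressibility point can be illustrated with an off-the-shelf compressor: a patterned byte string compresses drastically, while random bytes barely compress at all. A rough Python sketch with made-up data:

import os, zlib

# A patterned (highly ordered) byte string versus the same length of random bytes.
patterned = b"ABABABAB" * 1000          # 8,000 bytes of obvious pattern
random_bytes = os.urandom(8000)          # 8,000 bytes of noise

for label, data in (("patterned", patterned), ("random", random_bytes)):
    compressed = zlib.compress(data, 9)  # maximum compression effort
    print(f"{label:10s}: {len(data)} bytes -> {len(compressed)} bytes compressed")

# The patterned string shrinks dramatically; the random string barely shrinks,
# i.e. it is already carrying close to the maximum Shannon information per byte.

This only demonstrates the compressibility side of the argument; it says nothing by itself about "subjective" or specified information, which is the contested part of the thread.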
Hi Tim, Now here’s the ironic twist. In an objective sense systems of greater entropy have greater information. Objective information is measured in the Shannon sense, H = SUM(Pi log(Pi), and this maximizes when all bits, i, are equally probable. For example, in a random sequence each bit has an equal probability of being 1 or 0. When Shannon information increases the system actually gets more disordered. Why? Because if the disordered system is thought of as a message of bits then because our macroscopic ignorance of the system is so profound when entropy is high, reading the system as a message bit by bit will return maximum information. Tim, exactly what "objective" messages are you receiving when you read maximum entropy random coin flips -- or disordered particle positions?William Brookfield
March 7, 2008, 05:43 PM PDT
Erratum: The definition of H should read: H = - SUM pi log(pi)Timothy V Reeves
March 7, 2008, 04:25 PM PDT
I have looked at the two papers by Duncan and Semura and also at Granville's appendix D. I have no essential disagreements with the analyses of these papers, but I do have problems with their interpretations.

When an isolated system increases its entropy we know less about it because its exact microscopic atomic configuration occupies one of an enormous number of possible configurations (represented by the w in S = k log(w)) consistent with its macroscopic state. Therefore subjectively speaking we know less about the system because its microscopic state could be any one of a vast number of possibilities. For example, a binary sequence that returns frequency profiles similar to those returned by the throwing of a coin has so many ways of realizing these frequency profiles that knowledge of these profiles says next to nothing about the exact bit configuration of the sequence itself. So in a subjective sense, repeat, in a subjective sense, increasing entropy entails a loss of information.

Now here’s the ironic twist. In an objective sense systems of greater entropy have greater information. Objective information is measured in the Shannon sense, H = SUM(Pi log(Pi), and this maximizes when all bits, i, are equally probable. For example, in a random sequence each bit has an equal probability of being 1 or 0. When Shannon information increases the system actually gets more disordered. Why? Because if the disordered system is thought of as a message of bits then because our macroscopic ignorance of the system is so profound when entropy is high, reading the system as a message bit by bit will return maximum information.

So, when a system becomes more disordered, in the subjective sense information decreases because its macroscopic parameters (pressure, temperature, volume, frequency profiles or what have you) reveal less about its exact microscopic state. And yet in an objective sense the information content of the microscopic state has increased.

Duncan and Semura appear not to have made this distinction between subjective and objective information. Therefore I find their characterization of the second law in terms of the erasure of information unsatisfactory.

During crystallization in a solution a pocket of very high order develops in exchange for a compensating entropy increase in the system by way of an increase in temperature. In this simple example we have a case where very ordered (albeit simple) structures are being constructed in one part of the system at the expense of a decrease in order elsewhere. Consequently, Granville's intuitive discomfort with the general idea of degradation in one room allowing assembly of high order in another is not justified, at least at this rather elementary level of crystallization.

By the way, it is worth noting that crystals are far more ordered than organisms simply because there are far more ways of being an organism than a crystal. Organisms inhabit the space between high order and low order (called 'complexity'), although, of course, given the vast possibilities of morphospace, organisms are still an extremely unrepresentative way of being matter and therefore constitute very high order. Now, it's very tempting to draw an analogy between the ratchet of crystal formation and the 'crystallization' of organisms on earth via the ratchet of evolution accompanied by an entropy-compensating warming of the surroundings. Needless to say, you are going to tell me that organisms with their ramifying complexity are a whole new ball game.
Yes, that’s true, but this is where ‘computation time’ comes into play. At one extreme we have crystals whose simple structures will have a short computation time - they have very little variety and therefore there is not much information (in the Shannon sense) in them; hence the random shufflings of the information-rich solution take relatively little time to deliver atoms in the right places. At the other extreme let us take a single microscopic configuration of solution atoms that has maximum disorder. Because a disordered configuration contains so much information (in the Shannon sense) it will take more than quite a few universe lifetimes before it actually appears amongst the random shufflings of atoms. On the other hand organisms are an intermediate – in fact a lot nearer the ordered end than the disordered, but compared to crystals they have an enormous information content, and so the random shufflings of atoms have a lot more Shannon information to deliver up, and if they are going to deliver the right atoms in the right place at the right time, this is going to consume a lot more time than crystallization.

Now, I’m NOT, repeat NOT, saying ‘therefore evolution has taken place’. I’m saying that IF evolution has taken place as a kind of very complex ‘locking in place’ of material components (whose low-end limit is found in crystallization and whose upper-end limit is found in the appearance of a single designated disordered configuration), then the expectation time of evolution is going to be intermediate between the reification of crystals and the reification of a particular highly disordered microscopic arrangement of atoms. To work, of course, evolution, like crystallization, requires a ratchet mechanism, a mechanism presumably bestowed by the physical regime.

What I AM saying, however, is don’t use the second law to try and scupper evolution, because the second law does not in principle prevent evolution. Best stick to the arguments about irreducible complexity which deny the existence of that conjectured ratchet mechanism for evolution. Unlike the second law that yields to analysis, the analytical intractability surrounding IC makes it far more indomitable.Timothy V Reeves
March 7, 2008, 03:39 PM PDT
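The claim in the comment above that H maximizes when all symbols are equally probable can be checked numerically for a binary source, using the corrected form H = - SUM pi log(pi) from the erratum. A minimal Python sketch:

from math import log2

def H(p):
    """Shannon entropy in bits for a binary source with P(1) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * log2(p) + (1 - p) * log2(1 - p))

for p in (0.01, 0.1, 0.3, 0.5, 0.7, 0.99):
    print(f"P(1) = {p:4.2f}  ->  H = {H(p):.3f} bits per symbol")
# H peaks at 1 bit/symbol when P(1) = 0.5, exactly the "equally probable" case.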
PS to Larry and Turner: The issues over thermodynamics are not at the classical macro-level but arise once one looks at individual microstates and associated specific configurations of energy and mass that give rise to the more directly observable macrostates. On that, in effect we can define a config space relating to the possible ways the relevant matter can be arranged, normally quite large. Then, we start from an arbitrary state and do a random walk. Can we credibly get to the shorelines of an island of biologically relevant function, or if we happen to have been in such an island, can we hop to another? So far as I can see, once we are looking at a gap involving an information-storing capacity in excess of 500 - 1,000 bits to get to a cluster of functionally specified complex information from an arbitrary initial start point, we will by overwhelming improbability exhaust the probabilistic resources of the observed cosmos before we can make the new shore of functionality. That directly relates to the problem of getting to life [e.g. 300 - 500 k DNA base pairs for minimal life] from prebiotic chemistry, and it relates to the sort of body-plan level biodiversity seen in the Cambrian revolution, for instance. For example we need to go from a few millions of base pairs to, say, the 100 mn typical of an arthropod. The underlying issues are discussed through my nanobots and microjets thought experiment in that always linked, appendix a. GEM of TKIkairosfocus
March 5, 2008, 06:07 AM PDT
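The 500 - 1,000 bit threshold invoked in the comment above is straightforward to put beside the ~10^150 figure it uses for the states of the observed cosmos. A short Python sketch of the arithmetic only (the threshold itself and the 10^150 figure are the comment's assumptions, not independently derived here):

from math import log10

# Size of the configuration space for n bits of storage capacity.
for n_bits in (500, 1000):
    log10_space = n_bits * log10(2)
    print(f"2^{n_bits} ~ 10^{log10_space:.0f} configurations")

cosmos_states_log10 = 150   # figure used in the comment above
print(f"assumed cosmos resources: ~10^{cosmos_states_log10} states")
# 2^500 is already of order 10^151; 2^1000 is ~10^301, far beyond 10^150.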
Ah, folks: Thanks for the kind words. I have been busy with an energy policy for my adoptive homeland [now ready -- more or less -- for public discussion], and have been facing a real odd net access breakdown that was finally traced to a port not being set up right when the local phone co upgraded DSL speeds - took about 3 weeks altogether to figure out. Thermodynamics is of course the context of energy discussions. And 2 LOT as I pointed out has an informational aspect that should be taken seriously, especially when we take the microscopic, statistical look. I have also now read the two papers, and find them interesting. Sufficiently so that they have now joined the links in that always linked page. (Oh yes, it has been updated to encompass my recent discussion with Prof PO from last summer as continued offline.) We also do indeed need to beware of confusing definitions of information. For instance, one common def'n should really be more like info-carrying capacity. The relevant informational loss issue is that functionally specified, complex information embedded in configurations that are relevant to ID issues is not credibly originated by chance processes on the gamut of the observable cosmos. Indeed, due to the overwhelming statistical weight of non-functional states, information is reliably lost in random, spontaneous processes. On this, Shapiro was acid but apt in his recent Sci Am article, on common OOL scenarios:
The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck.
Orgel in his even more recent post-humous article, adds:
Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of [for instance] the reverse citric acid cycle was present anywhere on the primitive Earth [8], or that the cycle mysteriously organized itself topographically on a metal sulfide surface [6]? The lack of a supporting background in chemistry is even more evident in proposals that metabolic cycles can evolve to “life-like” complexity. The most serious challenge to proponents of metabolic cycle theories—the problems presented by the lack of specificity of most nonenzymatic catalysts—has, in general, not been appreciated. If it has, it has been ignored. Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own . . . . The prebiotic syntheses that have been investigated experimentally almost always lead to the formation of complex mixtures. Proposed polymer replication schemes are unlikely to succeed except with reasonably pure input monomers. No solution of the origin-of-life problem will be possible until the gap between the two kinds of chemistry is closed. Simplification of product mixtures through the self-organization of organic reaction sequences, whether cyclic or not, would help enormously, as would the discovery of very simple replicating polymers. However, solutions offered by supporters of geneticist or metabolist scenarios that are dependent on “if pigs could fly” hypothetical chemistry are unlikely to help.
Okay, have fun all . . . GEM of TKIkairosfocus
March 5, 2008, 05:51 AM PDT
One must be careful not to mix and match formal definitions of information. While "information about" past states is generally lost in state transitions of a macroscopic system -- i.e., transitions are irreversible -- the information in the system increases in the sense that the probability distribution on states approaches the uniform as the system approaches thermal equilibrium. While information of one sort is lost, another increases.Turner Coates
March 4, 2008, 07:51 PM PDT
Could it be that Professor Sewell has failed to model hysteresis? Observe that the genetic pools of species function essentially as memory. Memory is more-or-less preserved with work. Because the sun shines on the open system we call the earth, there is a constant source of energy for conversion into work. One might say that species remember how to work to preserve memory. Very few errors occur in transmission of genomic memory from generation to generation. Errors that engender effective means of propagating genomic memory are sometimes fixed in genomic memory. Ratchets, in the sense of informational physics, account for evolutionary adaptation.Turner Coates
March 4, 2008, 01:48 PM PDT
The 2nd Law of Thermodynamics is often stated in ways that have nothing to do with biology, e.g.:

Kelvin statement: It is impossible to construct an engine, operating in a cycle, whose sole effect is receiving heat from a single reservoir and the performance of an equivalent amount of work.

Clausius statement: It is impossible to carry out a cyclic process using an engine connected to two heat reservoirs that will have as its only effect the transfer of a quantity of heat from the low-temperature reservoir to the high-temperature reservoir.

Also, the classic 2nd Law of Thermodynamics usually concerns the macroscopic average properties of homogeneous substances. I think that a good illustration of the effect of the 2nd Law of Thermodynamics is a closed system with two finite reservoirs at different temperatures plus an engine -- say, a Carnot engine -- that performs work by operating in a cycle in which heat is received from the hot reservoir in one stage of the cycle and heat is transferred to the cold reservoir in another stage. As the work is performed, the hot reservoir becomes cooler and the cold reservoir becomes warmer, and as a result of these temperature changes the engine becomes increasingly less efficient (in a Carnot engine with an ideal gas as the working substance, the efficiency is defined as the ratio of (1) the temperature difference of the reservoirs to (2) the absolute temperature of the hot reservoir). Eventually a point is reached where the temperature difference between the two reservoirs is so small that practically no work can be performed at all. However, according to the First Law of Thermodynamics, the total internal energy of the closed system is the same as it was at the beginning. What has changed is that this energy is no longer capable of performing work inside the system because that energy is now uniformly scattered in the form of a uniform temperature throughout the system, whereas a difference in reservoir temperatures is required to perform work. The system has changed from an ordered system -- where the higher-energy gas particles in the hotter reservoir are separated from the lower-energy gas particles in the colder reservoir -- to a disordered system where the gas-particle energy is uniformly distributed throughout the system. This increase in disorder is represented by an increase in the total entropy of the system. More discussion is on my blog at -- http://im-from-missouri.blogspot.com/2007/02/2nd-law-of-thermodynamics-and.htmlLarry Fafarman
March 4, 2008, 02:18 AM PDT
I was hoping you would weigh in on that: https://uncommondescent.com/intelligent-design/leibniz-machines-of-nature-all-artificial-automata/
Frost122585
March 4, 2008, 01:06 AM PDT
kairosfocus, I haven't seen any of your posts in a while... where have you been? You missed a good conversation that I started on Leibniz and nature vs. man-made design.
Frost122585
March 4, 2008, 01:02 AM PDT
"Conservation of Information" One aspect of Duncan and Semura’s papers that is important to Intelligent Design relates to “conservation of information”. Early on science and engineering students learn about the first law of “conservation of energy”. Now there are also well over 100 “recent articles” and over 500 total hits for “conservation of information” listed by Google Scholar, and some 14,400 by Google, 27,700 by Yahoo. e.g. Dembski & Marks "Conservation of Information in Search: Measuring the Cost of Success" However, in their first paper The Deep Physics Behind the Second Law: Information and Energy As Independent Forms of Bookkeeping Duncan and Semura state:
We suggest that the second law as we observe it stems from the fact that (classical) information about the state of a system is fundamentally erased by many processes, and once truly lost, that information cannot be reliably recovered.
May I recommend caution, or clarification, in using the phrase "conservation of information", so as to distinguish between "conservation" as: 1) an upper-bound constraint on new information, versus 2) a lower-bound preservation of existing information. I submit that "2) preservation of information" only holds in systems with sufficient redundancy together with reproduction, analysis, and/or error-recovery mechanisms capable of recovering whatever portion of the information was deleted. I propose distinguishing these two aspects by the terms "constraint of information" and "preservation of information".
DLH
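As a toy illustration of why "preservation of information" depends on redundancy plus error recovery, here is a minimal Python sketch (the triple-repetition code and the single bit flip are assumptions chosen for illustration, not anything from Duncan and Semura's papers or the comment above):

import random

def encode(bits):
    # Triple-repetition code: redundancy that permits later error recovery
    return [b for b in bits for _ in range(3)]

def corrupt(code, n_flips=1):
    # Noise erases information by flipping bits at random positions
    code = code[:]
    for i in random.sample(range(len(code)), n_flips):
        code[i] ^= 1
    return code

def decode(code):
    # Majority vote over each block of three recovers the original bit
    return [1 if sum(code[i:i+3]) >= 2 else 0 for i in range(0, len(code), 3)]

message = [1, 0, 1, 1]
received = corrupt(encode(message), n_flips=1)
print(decode(received) == message)      # True: one flip is recoverable
print(corrupt(message, 1) == message)   # False: without redundancy the flip is a net loss

With the redundant copies and the majority-vote recovery step, the deleted portion of the message can be reconstructed; without them, the same single flip is an unrecoverable loss.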
March 3, 2008, 06:58 PM PDT
kairosfocus: Always a pleasure to hear from you. I missed you!
gpuccio
March 3, 2008, 04:32 PM PDT
This looks really interesting. Hope to have time to get into it in depth!
Timothy V Reeves
March 3, 2008, 03:19 PM PDT
Thus speaks a master! Thanks for the pointer to your insightful writings. I look forward to your considered comments.
DLH
March 3, 2008, 02:41 PM PDT
DLH & Prof Sewell: Good enough to make me unlurk again. (I've been busy writing an energy policy.) Two papers for my vaults. [Cf my discussion here in the always linked, esp. the excerpts from Harry Robertson of the informational school of thermodynamics, and of course Leon Brillouin.) A key component of my own view has been aptly summed up in an excerpt from the former:
. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I [HR] shall distinguish heat from work, and thermal energy from other forms . . . [pp. vii - viii] . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . . [deriving informational entropy, cf. discussions here, here, here, here and here; also Sarfati's discussion of debates and the issue of open systems here . . . ] H({pi}) = - C [SUM over i] pi*ln pi, [. . . "my" Eqn 6] [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp - beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . . [H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . . Jayne's [summary rebuttal to a typical objection] is ". . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly 'objective' quantity . . . it is a function of [those variables] and does not depend on anybody's personality. There is no reason why it cannot be measured in the laboratory." . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.]
I intend to have some enjoyable thermodynamics reading, thanks again DLH! GEM of TKI
kairosfocus
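A minimal numerical check of the informational-entropy expression quoted above, H = -C [SUM over i] pi*ln pi, is a Python sketch like the following (a four-state toy system with C = 1 is assumed; the probability values are made-up, not from Robertson's text):

from math import log

def H(probs):
    # Informational entropy, H = -sum(p_i * ln p_i), with C = 1 for simplicity
    return -sum(p * log(p) for p in probs if p > 0)

certain = [1.0, 0.0, 0.0, 0.0]        # outcome known: complete information
skewed  = [0.7, 0.1, 0.1, 0.1]        # partial information
uniform = [0.25, 0.25, 0.25, 0.25]    # no basis to prefer any state

print(H(certain))   # 0.0        -- zero missing information
print(H(skewed))    # ~0.94 nats
print(H(uniform))   # ~1.386 nats = ln(4), the maximum for four states

Complete information (one pi equal to 1) gives H = 0, while the uniform distribution -- the state of least information about the microstate -- gives the maximum value ln 4, matching the discussion in the excerpt.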
March 3, 2008, 02:00 PM PDT
Reviewing all challenges to the 2nd Law, Cápek and Sheehan strongly endorse it:
. . .while the second law might be potentially violable, it has not been violated in practice. This being the case, it is our position that the second law should be considered absolute unless experiment demonstrates otherwise.
Challenges to the Second Law of Thermodynamics, Vladislav Cápek and Daniel P. Sheehan, Springer, 2005, 347 pages, ISBN 1402030150. Their book compiles all the serious challenges to the 2nd Law.
The second law of thermodynamics is considered one of the central laws of science, engineering and technology. For over a century it has been assumed to be inviolable by the scientific community. Over the last 10-20 years, however, more than two dozen challenges to it have appeared in the physical literature - more than during any other period in its 150-year history. The number and variety of these represent a cogent threat to its absolute status. This is the first book to document and critique these modern challenges. Written by two leading exponents of this rapidly emerging field, it covers the theoretical and experimental aspects of principal challenges. In addition, unresolved foundational issues concerning entropy and the second law are explored. This book should be of interest to anyone whose work or research is touched by the second law.
DLH
March 3, 2008, 11:58 AM PDT
DLH, Thanks for the post, that is great. A major source of confusion w.r.t. the second law is that, unlike most fundamental laws of science, it has many different formulations, some much more general than others. Many physics texts apply it to all sorts of things (the breaking of glass, the demolition of a building) that are related to information loss but have no direct connection to thermodynamics; yet as soon as you apply it to evolution, suddenly it only applies to thermodynamics. The underlying principle behind all applications is that the laws of probability at the microscopic level drive the macroscopic processes, so that IS the second law, as far as I am concerned. By the way, I have previously noted the similarities of my second law arguments with Dembski's specified complexity arguments (see footnote here).
Granville Sewell
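One way to see "the laws of probability at the microscopic level" doing the macroscopic work is to count how improbable an ordered macrostate becomes as the particle number grows. A short Python sketch (the particle counts are arbitrary illustrative values, not anything from Sewell's papers):

from math import log10

# Probability that N independently placed particles all land in the left half
# of a box is (1/2)**N; report it as a power of ten.
for N in (10, 100, 1000, 10**6):
    print(f"N = {N:>7}: P(all in left half) ~ 10^{-N * log10(2):.0f}")

Already for N on the order of 10^17 particles (very roughly a microgram of matter) the "all particles in one half" macrostate is so improbable that the statistical and classical statements of the second law coincide for all practical purposes.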
March 3, 2008, 11:42 AM PDT
You see, thermodynamics is fine for a pointless physics class, but as soon as you apply anything to origins it had better support Darwin's theory of evolution, or else! I bet there are people out there trying to disprove the theory as we speak, motivated simply by its evolutionary consequences and hence its secondary metaphysical and theological implications.
"On the surface, Darwin's theory of evolution is seductively simple and, unlike many other theories can be summarized succinctly with no math... In order to do so in the real world, rather than just in our imaginations, there must be a biological rout to the structure that stands a reasonable chance of success in nature. In other words, variation, selection, and inheritance will only work if there is also a smooth evolutionary "pathway" leading from biological point A to biological point B. The question of the pathway is as critical in evolution as it is in everyday life." Michael J. Behe,- Edge of Evolution
As we see in that quote, the math that goes into the evolutionary process matters greatly for the feasibility and character of the evolutionary scheme. What can be said is that the process is not random. Darwin was wrong. What is left to our faith is whether this process has a discernible purpose and, if so, what it is.
Frost122585
March 3, 2008, 11:15 AM PDT