
“No process can result in a net gain of information” underlies 2LoT


Further to Granville Sewell‘s work on the 2nd Law in an open system, here is Duncan & Semura’s profound insight into how loss of information is the foundation for the 2nd Law of Thermodynamics. This appears foundational to the understanding, development, and testing of origin theories and of consequent change in physical and biotic systems.

The key insight here is that when one attempts to derive the second law without any reference to information, a step which can be described as information loss always makes its way into the derivation by some sleight of hand. An information-losing approximation is necessary, and adds essentially new physics into the model which is outside the realm of energy dynamics.


4. Summary of the Perspective

1) Energy and information dynamics are independent but coupled (see Figure 1).
2) The second law of thermodynamics is not reducible purely to mechanics (classical or quantum); it is part of information dynamics. That is, the second law exists because there is a restriction applying to information that is outside of and additional to the laws of classical or quantum mechanics.
3) The foundational principle underlying the second law can then be expressed succinctly in terms of information loss:
“No process can result in a net gain of information.”
In other words, the uncertainty about the detailed state of a system cannot decrease over time – uncertainty increases or stays the same.
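Duncan & Semura’s principle has a simple numerical illustration (an editor’s sketch, not from the paper): for any doubly stochastic transition matrix (the kind of mixing dynamics that loses track of the initial state), the Shannon entropy of the state distribution never decreases, so uncertainty can only grow or stay the same.

```python
import math

def shannon_entropy(p):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def step(p, T):
    """Propagate distribution p one step through transition matrix T."""
    n = len(p)
    return [sum(p[i] * T[i][j] for i in range(n)) for j in range(n)]

# An illustrative doubly stochastic matrix (rows AND columns sum to 1):
T = [[0.9, 0.1, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.1, 0.9]]

p = [1.0, 0.0, 0.0]   # complete information about the state: entropy 0
for t in range(5):
    print(f"t={t}  H={shannon_entropy(p):.4f} bits")
    p = step(p, T)    # entropy is non-decreasing at every step
```

Run it and H climbs monotonically from 0 toward log2(3): information about the initial state is progressively lost and cannot be regained by further evolution.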

The information loss perspective provides a natural framework for incorporating extensions and apparent challenges to the second law. The principle that “no process can result in a net gain of information” appears to be deeper and more universal than standard formulations of the second law.

. . . the information-loss framework offers the possibility of discovering new mechanisms of information storage through the analysis of second law challenges, deepening our understanding both of the second law and of information dynamics.

See full paper: Information Loss as a Foundational Principle for the Second Law of Thermodynamics, T. L. Duncan, J. S. Semura
Foundations of Physics, Volume 37, Issue 12, pp. 1767-1773, DOI 10.1007/s10701-007-9159-z
This builds on Duncan & Semura’s first paper:
The Deep Physics Behind the Second Law: Information and Energy As Independent Forms of Bookkeeping, T. Duncan, J. Semura, Entropy 2004, 6, 21-29, arXiv:cond-mat/0501014v1

60 Replies to ““No process can result in a net gain of information” underlies 2LoT”

  1. 1
    Frost122585 says:

    You see, thermodynamics is fine for pointless physics class but when you apply anything to origins it better support Darwin’s theory of evolution or else! I bet that there are people out there trying to disprove the theory as we speak- motivated simply by its evolutionary consequences and hence its secondary metaphysical and theological implications.

    “On the surface, Darwin’s theory of evolution is seductively simple and, unlike many other theories can be summarized succinctly with no math… In order to do so in the real world, rather than just in our imaginations, there must be a biological route to the structure that stands a reasonable chance of success in nature. In other words, variation, selection, and inheritance will only work if there is also a smooth evolutionary “pathway” leading from biological point A to biological point B. The question of the pathway is as critical in evolution as it is in everyday life.”

    Michael J. Behe, The Edge of Evolution

    As we see in that quote, the math that goes into the evolutionary process matters greatly for the feasibility and character of the evolutionary scheme. What can be said is that the process is not random. Darwin was wrong. What is up to our faith is whether this process has a discernible purpose and, if so, what it is.

  2. 2
    Granville Sewell says:

    DLH,
    Thanks for the post, that is great. A major source of confusion w.r.t. the second law is that, unlike most fundamental laws of science, there are many different formulations of the second law out there, some much more general than others. Many physics texts apply it to all sorts of things (breaking of glass, demolition of a building) that are related to information loss but have no direct connection to thermodynamics; yet as soon as you apply it to evolution, suddenly it only applies to thermodynamics. But the underlying principle behind all applications is that the laws of probability at the microscopic level drive the macroscopic processes, so that IS the second law, as far as I am concerned.

    By the way, I have previously noted the similarities of my second law arguments with Dembski’s specified complexity arguments (see footnote
    here ).

  3. 3
    DLH says:

    Reviewing all challenges to the 2nd Law, Čápek and Sheehan strongly endorse it:

    . . .while the second law might be potentially violable, it has not been violated in practice. This being the case, it is our position that the second law should be considered absolute unless experiment demonstrates otherwise.

    Challenges to the Second Law of Thermodynamics
    Vladislav Čápek, Daniel P. Sheehan, Springer, 2005, 347 pages, ISBN 1402030150

    Their book compiles all the serious challenges to the 2nd Law.

    The second law of thermodynamics is considered one of the central laws of science, engineering and technology. For over a century it has been assumed to be inviolable by the scientific community. Over the last 10-20 years, however, more than two dozen challenges to it have appeared in the physical literature – more than during any other period in its 150-year history. The number and variety of these represent a cogent threat to its absolute status. This is the first book to document and critique these modern challenges. Written by two leading exponents of this rapidly emerging field, it covers the theoretical and experimental aspects of principal challenges. In addition, unresolved foundational issues concerning entropy and the second law are explored. This book should be of interest to anyone whose work or research is touched by the second law.

  4. 4
    kairosfocus says:

    DLH & Prof Sewell:

    Good enough to make me unlurk again. (I’ve been busy writing an energy policy.)

    Two papers for my vaults. [Cf my discussion here in the always linked, esp. the excerpts from Harry Robertson of the informational school of thermodynamics, and of course Leon Brillouin.]

    A key component of my own view has been aptly summed up in an excerpt from the former:

    . . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I [HR] shall distinguish heat from work, and thermal energy from other forms . . . [pp. vii – viii]

    . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . .

    [deriving informational entropy, cf. discussions here, here, here, here and here; also Sarfati’s discussion of debates and the issue of open systems here . . . ]

    H({pi}) = – C [SUM over i] pi*ln pi, [. . . “my” Eqn 6]

    [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp – beta*yi) = Z [Z being in effect the partition function across microstates, the “Holy Grail” of statistical thermodynamics]. . . .

    [H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . .

    Jaynes’ [summary rebuttal to a typical objection] is “. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.” . . . . [pp. 3 – 6, 7, 36; replacing Robertson’s use of S for Informational Entropy with the more standard H.]
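    Robertson’s “Eqn 6” and the partition function above can be checked numerically. The sketch below (illustrative energy levels, not from Robertson) builds the Boltzmann probabilities pi = e^-[alpha + beta*yi] and confirms that H with C = k coincides with the standard statistical-mechanical entropy S = k ln Z + <E>/T:

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_distribution(energies, T):
    """pi = exp(-Ei/kT)/Z, with Z the partition function (Robertson's Eqns 1-2)."""
    beta = 1.0 / (k * T)
    weights = [math.exp(-beta * E) for E in energies]
    Z = sum(weights)
    return [w / Z for w in weights], Z

def info_entropy(p, C=k):
    """H = -C * SUM pi ln pi  (Robertson's 'Eqn 6', with C = k)."""
    return -C * sum(x * math.log(x) for x in p if x > 0)

# Three hypothetical energy levels (in joules) at room temperature.
energies = [0.0, 1e-21, 2e-21]
T = 300.0
p, Z = boltzmann_distribution(energies, T)

H = info_entropy(p)
E_avg = sum(pi * E for pi, E in zip(p, energies))
S = k * math.log(Z) + E_avg / T   # standard identity for the canonical ensemble
print(H, S)  # the informational and thermodynamic entropies agree
```

    This is just the numerical face of Robertson’s point: the “uncertainty of information theory becomes a thermodynamic variable when used in proper context.”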

    I intend to have some enjoyable thermodynamics reading, thanks again DLH!
    GEM of TKI

  5. 5
    DLH says:

    Thus speaks a master! Thanks for the reminder to your insightful writings.
    I look forward to your considered comments.

  6. 6
    Timothy V Reeves says:

    This looks really interesting. Hope to have time to get into it in depth!

  7. 7
    gpuccio says:

    kairosfocus:

    Always a pleasure to hear from you. I missed you!

  8. 8
    DLH says:

    “Conservation of Information”
    One aspect of Duncan and Semura’s papers that is important to Intelligent Design relates to “conservation of information”.

    Early on science and engineering students learn about the first law of “conservation of energy”. Now there are also well over 100 “recent articles” and over 500 total hits for “conservation of information” listed by Google Scholar, and some 14,400 by Google, 27,700 by Yahoo. e.g. Dembski & Marks “Conservation of Information in Search: Measuring the Cost of Success”

    However, in their first paper, The Deep Physics Behind the Second Law: Information and Energy As Independent Forms of Bookkeeping, Duncan and Semura state:

    We suggest that the second law as we observe it stems from the fact that (classical) information about the state of a system is fundamentally erased by many processes, and once truly lost, that information cannot be reliably recovered.

    May I recommend caution or clarification in using the phrase “conservation of information” to distinguish between “conservation” as:
    1) an upper bound constraint on new information
    versus:
    2) a lower bound preservation of information.

    I submit that “2) preservation of information” only holds in systems with sufficient redundancy and reproduction, analysis, and/or error recovery systems capable of recovering that portion of information that was deleted.

    I propose distinguishing these two different aspects by the terms:
    “Constraint of information” and
    “Preservation of information”

  9. 9
    Frost122585 says:

    kairosfocus, I haven’t seen any of your posts in a while… where have you been? You missed a good conversation that I started on Leibniz and nature vs man-made design.

  10. 10
    Frost122585 says:

    I was hoping you would weigh in on that-

    http://www.uncommondescent.com.....-automata/

  11. 11
    Larry Fafarman says:

    The 2nd Law of Thermodynamics is often stated in ways that have nothing to do with biology, e.g.,

    Kelvin statement: It is impossible to construct an engine, operating in a cycle, whose sole effect is receiving heat from a single reservoir and the performance of an equivalent amount of work.

    Clausius statement: It is impossible to carry out a cyclic process using an engine connected to two heat reservoirs that will have as its only effect the transfer of a quantity of heat from the low-temperature reservoir to the high-temperature reservoir.

    Also, the classic 2nd Law of Thermodynamics usually concerns the macroscopic average properties of homogeneous substances.

    I think that a good illustration of the effect of the 2nd Law of Thermodynamics is a closed system with two finite reservoirs at different temperatures plus an engine — say, a Carnot engine — that performs work by operating in a cycle in which heat is received from the hot reservoir in one stage of the cycle and heat is transferred to the cold reservoir in another stage. As the work is performed, the hot reservoir becomes cooler and the cold reservoir becomes warmer, and as a result of these temperature changes the engine becomes increasingly less efficient (in a Carnot engine with an ideal gas as the working substance, the efficiency is defined as the ratio of (1) the temperature difference of the reservoirs to (2) the absolute temperature of the hot reservoir).

    Eventually a point is reached where the temperature difference between the two reservoirs is so small that practically no work can be performed at all. However, according to the First Law of Thermodynamics, the total internal energy of the closed system is the same as it was at the beginning. What has changed is that this energy is no longer capable of performing work inside the system because that energy is now uniformly scattered in the form of a uniform temperature throughout the system whereas a difference in reservoir temperatures is required to perform work.

    The system has changed from an ordered system — where the higher-energy gas particles in the hotter reservoir are separated from the lower-energy gas particles in the colder reservoir — to a disordered system where the gas-particle energy is uniformly distributed throughout the system. This increase in disorder is represented by an increase in the total entropy of the system.
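    The scenario just described is easy to simulate. The rough sketch below (equal, purely illustrative heat capacities and step size assumed) shows the Carnot efficiency eta = 1 - Tc/Th falling toward zero as the two finite reservoirs equilibrate:

```python
# Two finite reservoirs of equal heat capacity C, coupled by a Carnot
# engine. Efficiency falls as the temperatures converge, until almost
# no further work can be extracted. All values are illustrative.
C = 1000.0        # heat capacity of each reservoir, J/K
Th, Tc = 400.0, 300.0
dQ = 100.0        # heat drawn from the hot reservoir per cycle, J

total_work = 0.0
while Th - Tc > 0.1:
    eta = 1.0 - Tc / Th          # Carnot efficiency at current temperatures
    work = eta * dQ
    total_work += work
    Th -= dQ / C                 # hot reservoir cools
    Tc += (dQ - work) / C        # cold reservoir warms by the rejected heat

print(f"final Th={Th:.2f} K, Tc={Tc:.2f} K, work extracted={total_work:.1f} J")
```

    The final common temperature lands near the geometric mean sqrt(400 * 300) ≈ 346.4 K, the textbook result for a reversible engine run between finite equal-capacity reservoirs, and thereafter no temperature difference remains to drive work.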

    More discussion is on my blog at —
    http://im-from-missouri.blogsp.....s-and.html

  12. 12
    Turner Coates says:

    Could it be that Professor Sewell has failed to model hysteresis? Observe that the genetic pools of species function essentially as memory. Memory is more-or-less preserved with work. Because the sun shines on the open system we call the earth, there is a constant source of energy for conversion into work. One might say that species remember how to work to preserve memory. Very few errors occur in transmission of genomic memory from generation to generation. Errors that engender effective means of propagating genomic memory are sometimes fixed in genomic memory. Ratchets, in the sense of informational physics, account for evolutionary adaptation.

  13. 13
    Turner Coates says:

    One must be careful not to mix and match formal definitions of information.

    While “information about” past states is generally lost in state transitions of a macroscopic system — i.e., transitions are irreversible — the information in the system increases in the sense that the probability distribution on states approaches the uniform as the system approaches thermal equilibrium.

    While information of one sort is lost, another increases.

  14. 14
    kairosfocus says:

    Ah, folks

    Thanks for the kind words.

    I have been busy with an energy policy for my adoptive homeland [now ready — more or less — for public discussion], and have been facing a real odd net access breakdown that was finally traced to a port not being set up right when the local phone co upgraded DSL speeds – took about 3 weeks altogether to figure out.

    Thermodynamics is of course the context of energy discussions. And 2 LOT as I pointed out has an informational aspect that should be taken seriously, especially when we take the microscopic, statistical look.

    I have also now read the two papers, and find them interesting.

    Sufficiently so that they have now joined the links in that always linked page. (Oh yes, it has been updated to encompass my recent discussion with Prof PO from last summer as continued offline.)

    We also do indeed need to beware of confusing definitions of information. For instance, one common def’n should really be more like info-carrying capacity.

    The relevant informational loss issue is that functionally specified, complex information embedded in configurations that are relevant to ID issues, is not credibly originated by chance processes on the gamut of the observable cosmos. Indeed, due to the overwhelming statistical weight of non-functional states, information is reliably lost in random, spontaneous processes.

    On this, Shapiro was acid but apt in his recent Sci Am article, on common OOL scenarios:

    The analogy that comes to mind is that of a golfer, who having played a golf ball through an 18-hole course, then assumed that the ball could also play itself around the course in his absence. He had demonstrated the possibility of the event; it was only necessary to presume that some combination of natural forces (earthquakes, winds, tornadoes and floods, for example) could produce the same result, given enough time. No physical law need be broken for spontaneous RNA formation to happen, but the chances against it are so immense, that the suggestion implies that the non-living world had an innate desire to generate RNA. The majority of origin-of-life scientists who still support the RNA-first theory either accept this concept (implicitly, if not explicitly) or feel that the immensely unfavorable odds were simply overcome by good luck.

    Orgel in his even more recent post-humous article, adds:

    Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of [for instance] the reverse citric acid cycle was present anywhere on the primitive Earth [8], or that the cycle mysteriously organized itself topographically on a metal sulfide surface [6]? The lack of a supporting background in chemistry is even more evident in proposals that metabolic cycles can evolve to “life-like” complexity. The most serious challenge to proponents of metabolic cycle theories—the problems presented by the lack of specificity of most nonenzymatic catalysts—has, in general, not been appreciated. If it has, it has been ignored. Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own . . . .

    The prebiotic syntheses that have been investigated experimentally almost always lead to the formation of complex mixtures. Proposed polymer replication schemes are unlikely to succeed except with reasonably pure input monomers. No solution of the origin-of-life problem will be possible until the gap between the two kinds of chemistry is closed. Simplification of product mixtures through the self-organization of organic reaction sequences, whether cyclic or not, would help enormously, as would the discovery of very simple replicating polymers. However, solutions offered by supporters of geneticist or metabolist scenarios that are dependent on “if pigs could fly” hypothetical chemistry are unlikely to help.

    Okay, have fun all . . .

    GEM of TKI

  15. 15
    kairosfocus says:

    PS to Larry and Turner:

    The issues over thermodynamics are not at the classical macro-level but arise once one looks at individual microstates and associated specific configurations of energy and mass that give rise to the more directly observable macrostates.

    On that, in effect we can define a config space relating to the possible ways the relevant matter can be arranged, normally quite large. Then, we start from an arbitrary state and do a random walk. Can we credibly get to the shorelines of an island of biologically relevant function, or if we happen to have been in such an island, can we hop to another?

    So far as I can see, once we are looking at a gap involving an information storing capacity in excess of 500 – 1,000 bits to get to a cluster of functionally specified complex information from an arbitrary initial start point, we will by overwhelming improbability exhaust the probabilistic resources of the observed cosmos before we can make the new shore of functionality. That directly relates to the problem of getting to life [e.g. 300 – 500 k DNA base pairs for minimal life] from prebiotic chemistry, and it relates to the sort of body-plan level biodiversity seen in the Cambrian revolution for instance. For example, we need to go from a few million base pairs to, say, the 100 mn typical of an arthropod.
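    The arithmetic behind the 500-bit threshold can be checked directly. The sketch below uses the commonly quoted ~10^150 estimate of the universe’s probabilistic resources (roughly 10^80 particles, 10^45 state changes per second, 10^25 seconds); that figure is the editor’s assumption, not stated in the comment:

```python
import math

# Often-cited upper bound on events the observable cosmos can sample:
resources = 10**150
print(f"10^150 is about 2^{math.log2(resources):.0f}")   # roughly 2^498

# A 500-bit configuration space already exceeds that bound,
# and 1,000 bits exceeds it overwhelmingly.
print(2**500 > resources)    # True
print(2**1000 / resources)   # astronomically large ratio
```

    So a blind search credited with every event the observed cosmos could generate still samples less than one configuration in 2^2 of a 500-bit space, which is the quantitative core of the comment’s claim.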

    The underlying issues are discussed through my nanobots and microjets thought experiment in that always linked, appendix a.

    GEM of TKI

  16. 16
    Timothy V Reeves says:

    I have looked at the two papers by Duncan and Semura and also at Granville’s appendix D. I have no essential disagreements with the analyses in these papers, but I do have problems with their interpretations.

    .
    When an isolated system increases its entropy we know less about it because its exact microscopic atomic configuration occupies one of an enormous number of possible configurations (represented by the w in S = k log(w)), consistent with its macroscopic state. Therefore subjectively speaking we know less about the system because its microscopic state could be any one of a vast number of possibilities. For example, a binary sequence that returns frequency profiles similar to those returned by the throwing of a coin has so many ways of realizing these frequency profiles that knowledge of these profiles says next to nothing about the exact bit configuration of the sequence itself. So in a subjective sense, repeat, in a subjective sense, increasing entropy entails a loss of information.
    .
    Now here’s the ironic twist. In an objective sense systems of greater entropy have greater information. Objective information is measured in the Shannon sense, H = SUM(Pi log(Pi), and this maximizes when all bits, i, are equally probable. For example, in a random sequence each bit has an equal probability of being 1 or 0. When Shannon information increases the system actually gets more disordered. Why? Because if the disordered system is thought of as a message of bits then because our macroscopic ignorance of the system is so profound when entropy is high, reading the system as a message bit by bit will return maximum information.
    .
    So, when a system becomes more disordered in the subjective sense information decreases because its macroscopic parameters (pressure, temperature, volume, frequency profiles or what have you) reveal less about its exact microscopic state. And yet in objective sense the information content of the microscopic state has increased.
    .
    Duncan and Semura appear not to have made this distinction between subjective and objective information. Therefore I find their characterization of the second law in terms of the erasure of information unsatisfactory.
    .
    During crystallization in a solution a pocket of very high order develops in exchange for a compensating entropy increase in the system by way of an increase in temperature. In this simple example we have a case where very ordered (albeit simple) structures are being constructed in one part of the system at the expense of a decrease in order elsewhere. Consequently, Granville’s intuitive discomfort with the general idea of degradation in one room allowing assembly of high order in another, is not justified at least at this rather elementary level of crystallization.
    .
    By the way, it is worth noting that crystals are far more ordered than organisms simply because there are far more ways of being an organism than a crystal. Organisms inhabit the space between high order and low order, (called ‘complexity’), although, of course, given the vast possibilities of morphospace organisms are still an extremely unrepresentative way of being matter and therefore constitute very high order. Now, it’s very tempting to draw an analogy between the ratchet of crystal formation and the ‘crystallization’ of organisms on earth via the ratchet of evolution accompanied by an entropy compensating warming of the surroundings. Needless to say, you are going to tell me that organisms with their ramifying complexity are a whole new ball game.
    .
    Yes, that’s true, but this is where ‘computation time’ comes into play. At one extreme we have crystals whose simple structures will have a short computation time – they have very little variety and therefore there is not much information (in the Shannon sense) in them; hence the random shufflings of the information rich solution take relatively little time to deliver atoms in the right places. At the other extreme let us take a single microscopic configuration of solution atoms that has maximum disorder. Because a disordered configuration contains so much information (in the Shannon sense) it will take more than quite a few universe life times before it actually appears amongst the random shufflings of atoms. On the other hand organisms are intermediate – in fact a lot nearer the ordered end than the disordered, but compared to crystals they have an enormous information content and so the random shufflings of atoms have a lot more Shannon information to deliver up, and if they are going to deliver the right atoms in the right place at the right time, this is going to consume a lot more time than crystallization.
    .
    Now, I’m NOT, repeat NOT, saying ‘therefore evolution has taken place’. I’m saying that IF evolution has taken place as a kind of very complex ‘locking in place’ of material components (whose low end limit is found in crystallization and whose upper end limit is found in the appearance of a single designated disordered configuration), then the expectation time of evolution is going to be intermediate between the reification of crystals and the reification of a particular highly disordered microscopic arrangement of atoms. To work, of course, evolution, like crystallization, requires a ratchet mechanism, a mechanism presumably bestowed by the physical regime.
    .
    What I AM saying, however, is don’t use the second law to try and scupper evolution because the second law does not in principle prevent evolution. Best stick to the arguments about irreducible complexity which deny the existence of that conjectured ratchet mechanism for evolution. Unlike the second law that yields to analysis, the analytical intractability surrounding IC makes it far more indomitable.

  17. 17
    Timothy V Reeves says:

    Erratum: The definition of H should read:

    H = – SUM pi log(pi)
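    With the corrected sign it is easy to verify numerically that H peaks when the outcomes are equally probable (a quick illustrative check, using the binary case from the comment above):

```python
import math

def H(p):
    """Binary Shannon entropy, H = -SUM pi log2(pi), per the erratum."""
    probs = [p, 1.0 - p]
    return -sum(x * math.log2(x) for x in probs if x > 0)

# Entropy of a biased coin across a range of biases: it is zero when
# the outcome is certain and maximal (1 bit) at p = 0.5.
for p in [0.0, 0.1, 0.25, 0.5, 0.75, 1.0]:
    print(f"p={p:.2f}  H={H(p):.4f} bits")
```

    This is the sense in which a maximally disordered (uniform) source carries maximum Shannon information per symbol.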

  18. 18

    Hi Tim,

    Now here’s the ironic twist. In an objective sense systems of greater entropy have greater information. Objective information is measured in the Shannon sense, H = SUM(Pi log(Pi), and this maximizes when all bits, i, are equally probable. For example, in a random sequence each bit has an equal probability of being 1 or 0. When Shannon information increases the system actually gets more disordered. Why? Because if the disordered system is thought of as a message of bits then because our macroscopic ignorance of the system is so profound when entropy is high, reading the system as a message bit by bit will return maximum information.

    Tim, exactly what “objective” messages are you receiving when you read maximum entropy random coin flips — or disordered particle positions?

  19. 19
    DaveScot says:

    Timothy Reeves

    Good. I’ve written here on several occasions that there is a difference between subjective and objective information. They can’t be interchanged as equal quantities but, here’s the catch, they both can be subject to the law of entropy and the law of conservation. The flaw in the Darwinian open-system response is they equate all kinds of order as interchangeable quantities and clearly they are not. Adding heat to an open system doesn’t increase carbon order. In fact it does just the opposite. Pour heat into a log and see if its carbon order increases. Like duh. Pour carbon order into a log and its carbon order increases. This is what Granville is trying to explain a million different ways and it just isn’t sinking in with some people. Order can be imported across a boundary but different types of order, even though they obey the same laws, are not interchangeable.

    Now let’s see if you understand the relationship between Shannon (objective) information and subjective (specified) information. As you wrote subjective information can be destroyed. The law of entropy assures us it will be destroyed over time. But here’s the kicker, as subjective information is destroyed objective information increases. A maximally loaded Shannon information channel is completely random. If it can be compressed then it isn’t carrying the maximum amount of objective information. You understand that well enough. If there is any subjective information in the channel it means there’s a pattern in it and the pattern by definition makes it less than completely random. So in that sense the thermodynamic law of conservation of energy – energy is neither created nor destroyed but only changes form – holds true for information as well. Information cannot be created nor destroyed but only changes form. In this case the law of entropy changes its form from subjective to objective. To go the other way, opposite to the law of increasing entropy, requires importing subjective order across the boundary in an open system just as we must import carbon order to increase carbon order and we can’t exchange thermal order for carbon order.
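    The compressibility point can be illustrated with a quick sketch. Here zlib stands in as a crude proxy for ideal compression (an editor’s illustration, not a formal measure of channel capacity): a patterned stream compresses to a tiny fraction of its size, while a maximum-entropy random stream barely compresses at all.

```python
import os
import zlib

patterned = b"ABCD" * 1024          # highly patterned: 4096 bytes
random_bytes = os.urandom(4096)     # near-maximum-entropy: 4096 bytes

for name, data in [("patterned", patterned), ("random", random_bytes)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compressed to {ratio:.0%} of original size")
```

    A stream that can be compressed is, by that fact, carrying less than the maximum objective (Shannon) information per bit, which is the relationship the comment describes.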

    If you’ve followed along so far then we come to an important question. If the only way to increase subjective order in an open system is by importing subjective order across a boundary what is the source of that imported subjective order? The only reasonable thing I can think of is it must have come from a like subject. I can put a book loaded with subjective information in front of my dog but he won’t understand it. That’s because he’s not a subject like myself. Following that if we find subjective order in the universe and the source wasn’t human then it seems like it must then at least come from a subject something like ourselves. If we were created in God’s image then that satisfies the requirement for a like subject. Or maybe it’s an evolved intelligence. Whatever it is it must be something like ourselves or we wouldn’t be able to discern the subjective information just like my dog can’t discern the information in a book.

    I’m not sure which prior century the Darwinians are pulling their understanding of the laws of thermodynamics from but it sure isn’t the 20th or 21st centuries. Those laws were generalized a long time ago from thermal order to other types of order and in the latter 20th century were generalized to objective ( Shannon)information as well. I don’t know if they’ve been generalized to include subjective information before but I just did and given the ease of doing it I can’t imagine I’m the first.

    Now it may very well be that there are unknown physical laws in operation which cause intelligent life to emerge from them just like the known physical laws that produce snowflakes and other crystalline order from unordered matter but until those laws can be described they are nothing more than flights of fantasy and I will readily concede that Darwinian chance worshippers are experienced pilots in the fantasy airline.

  20. 20

    Hi Dave,

    A maximally loaded Shannon information channel is completely random.

    I think I have to disagree here. A maximally loaded information channel only seems random in isolation because all of the order to which it refers is now kept “offshore.” As I see it, the channel must be considered ordered (non-random) because it is, in every specified bit, a crucial component of a larger system (order).

  21. 21
    Turner Coates says:

    18:

    Tim, exactly what “objective” messages are you receiving when you read maximum entropy random coin flips — or disordered particle positions?

    May I answer? What message could be more objective than a description of the state of an object?

  22. 22
    DaveScot says:

    William

    The information content of a stream is maximized when the stream cannot be perfectly described in fewer bits than it contains. If it contains encrypted information which appears to be random then the description of the encryption code becomes part of the stream (no free lunch). For example, we can’t mutually agree that preselected random codes in an information stream refer to encyclopedia articles without incorporating both the codebook and the articles to which they refer as part of the information stream. You’re basically saying that hidden channels don’t count. Hidden channels do count.

  23. 23
    DLH says:

    Part of the confusion over “order” is that in English it is used in two senses:

    One is “order” due to physical laws, as in a crystal.

    The other sense is order in the sense of “ordering” a sequence from low to high integers etc.

    Better to use Natural law for the first and Specified Information for the second. etc.

    Shannon “information” is being referred to as “information entropy”.

    It can be thought of as the capacity to carry a signal, and not the information within the signal itself.

    So Timothy Reeves
    “Subjective information” is better termed “Specified Information”

    and “Objective information” better called “information entropy”.

  24. 24
    Timothy V Reeves says:

    Sorry Dave, but I hope you’ve got time for a little more explaining because, evolution or no evolution, Granville’s work isn’t sinking into my skull: I can make little sense of it when applied to the relatively prosaic process of crystal formation.
    .
    In crystal formation a domain of high order (albeit simple) develops and the containing solution warms a little. Therefore the entropy books report a gain greater than or equal to zero because the reduction in w incurred by the crystal formation is more than compensated for by the increase in w caused by the solution warming. (Where w = number of microscopic states consistent with the macroscopic state as defined by temperature, volume, pressure etc, and where S = k log(w)).
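The entropy bookkeeping described above can be sketched numerically. The multiplicities below are hypothetical, chosen only to illustrate how a local decrease in w at the crystal can be outweighed by the increase in w of the warming solution:

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def entropy(w):
    """S = k log(w) for a macrostate with w consistent microstates."""
    return k * math.log(w)

# Hypothetical multiplicities: crystallisation cuts the crystal's w, while
# the heat released raises the solution's w by a larger factor.
w_crystal_before, w_crystal_after = 1e30, 1e10
w_solution_before, w_solution_after = 1e40, 1e65

dS_crystal = entropy(w_crystal_after) - entropy(w_crystal_before)
dS_solution = entropy(w_solution_after) - entropy(w_solution_before)

print(dS_crystal < 0, dS_crystal + dS_solution >= 0)  # local order, net gain
```

The second law only constrains the sum; it says nothing against the local decrease at the crystal.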
    .
    In the context of crystal formation I find difficulty trying to apply Granville’s notion of explicit ‘information’ crossing a domain boundary bringing order to the system. Fair enough, given that Granville supports ID, I understand why he believes that direct Divine information input is required to bring about the extravagantly complex and ordered systems of life. But the organizational ‘low end’ doesn’t need to crib directly from the Divine mind. In crystal formation no books on crystallography cross into the crystal domain telling the atoms how to organize. Crystals crystallize because the physical regime providentially provides for a relatively simple morphospace containing ‘contours of stability’, ‘ratchets of formation’, or ‘Dawkins slopes’ or whatever you want to call them. In this process there are no imports of explicit information across the boundary, but there is an export of randomized kinetic energy (that is, heat) maintaining a positive overall value in the entropy increment. A similar analysis may be carried out on the heat pouring in from the Sun to the Earth. The physics of the Earth system uses the low entropy of the sun/outer space temperature gradient to produce organized forms of energy, namely kinetic energy (winds) and potential energy (clouds). So pumping heat into something can produce order at least at the organizational low end.
    .
    That the conjectured evolution of life represents a local increase in order is not in and of itself enough to rule out evolution – as we have seen crystal formation entails a local increase of order. In crystallization this order is not explicitly imported across the boundary but is inherent in atomic forces, with trial and error matches being made by the random agitations of atoms and locked in by these forces if a match is found. An exhaust of waste heat to the general environment offsets the entropy decrease entailed by the crystallization. Hence, observations of local increases in order (whether from crystallization or evolution) are not in themselves violations of the second law. This is not to say that one can’t rule out evolution on other grounds such as IC (which states that the contours of stability in morphospace enabling organisms to ‘crystallize out’ in stages don’t exist).
    .
    I’m sorry Dave but I don’t understand the relationship between subjective and objective information as you have described it. If we represent subjective information by SI and objective information by OI, are you saying that a conservation law of form d(SI) + d(OI) = 0 holds? If I have understood you correctly then this fails to make sense to me, because I can conceive circumstances where both d(SI) and d(OI) increase. Take a transmitted binary sequence: Here the macroscopic parameters are the statistical frequency profiles. If during the transmission of the signal the frequency profiles change and start to approximate closer to that produced by coin tossing, then clearly d(OI) is positive. Now, if all we know about the sequence are its statistical profiles then it is true that subjective information decreases because the known macroscopic parameters provide an envelope that is consistent with a much larger number, w, of possible sequence configurations. But if as the frequency profiles change to a more disordered regime we simultaneously start to read the sequence bit by bit then that means that d(SI) has also increased and hence d(SI) + d(OI) != 0. Please note that when reading the sequence a ‘hidden’ channel of interpretive information is not a necessary concomitant: If one is reading a random sequence, it need not necessarily have further meaning; the configuration of bits may be all one wants to know and blow any conjectured ‘interpretation’.
    .
    The assumption that subjective information entails pattern is not true. Pattern entails compressibility of knowledge, but let me repeat: subjective information doesn’t entail pattern; we may have a brain like the memory man and be capable of memorizing a book of random numbers. Therefore subjective knowledge is not what we are ultimately interested in here. What interests us here are those objective patterns called organisms whether we know about them or not.
    .
    You concede, Dave, that there may be unknown laws capable of generating organisms. This may be true, but there is also another option, although IC explicitly denies it; namely, that the morphospace implicit in the regime of physical laws we already know does have contours of stability running through it all the way up to organic forms, thus enabling these forms to ‘crystallize out’ in stages. What I am saying is that perhaps the ‘unknown laws’ which you admit could in principle be capable of generating life are already known to us! But of course this is where the real debate starts. I have to confess that although I can make some rather general and abstract statements about this issue, as I’m not a paleontologist, natural historian, or biochemist, I can’t argue very cogently about it one way or the other. But I’ll try!

  25. 25
    kairosfocus says:

    A footnote:

    I here excerpt Harry Robertson’s Statistical Thermophysics, as that is very relevant to the above:

    ++++++++

    . . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . . [pp. vii – viii]

    . . . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability should be seen as, in part, an index of ignorance] . . . .

    [deriving informational entropy . . . .]

    S({pi}) = – C [SUM over i] pi*ln pi, [. . . “my” Eqn A.4]

    [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp – beta*yi) = Z [Z being in effect the partition function across microstates, the “Holy Grail” of statistical thermodynamics]. . . .[pp.3 – 6]

    S, called the information entropy, . . . correspond[s] to the thermodynamic entropy, with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context [p. 7] . . . .

    Jayne’s [summary rebuttal to a typical objection] is “. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.” . . . . [p. 36.]

    [Robertson, Statistical Thermophysics, Prentice Hall, 1993. (NB: Sorry for the math and the use of text for symbolism. However, it should be clear enough that Robertson first summarises how Shannon derived his informational entropy [though Robertson uses s rather than the usual H for that information theory variable, average information per symbol], then ties it to entropy in the thermodynamic sense using another relation that is tied to the Boltzmann relationship above. This context gives us a basis for looking at the issues that surface in prebiotic soup or similar models as we try to move from relatively easy to form monomers to the more energy- and information- rich, far more complex biofunctional molecules.)]

    ++++++++
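Robertson’s two limiting cases in the excerpt above — complete information when one p_i is unity, least information when no outcome is favoured — can be checked directly against Shannon’s formula. A minimal sketch (taking C = 1, so entropy is in nats):

```python
import math

def info_entropy(probs, C=1.0):
    """S({p_i}) = -C * sum_i p_i ln p_i (terms with p_i = 0 contribute 0)."""
    return -C * sum(p * math.log(p) for p in probs if p > 0)

certain = [1.0, 0.0, 0.0, 0.0]  # outcome known in advance: complete information
uniform = [0.25] * 4            # no basis to prefer any outcome: least information
skewed = [0.7, 0.1, 0.1, 0.1]   # partial information: strictly in between

print(info_entropy(certain))                      # 0: no uncertainty
print(info_entropy(uniform), math.log(4))         # maximum: ln(number of outcomes)
print(0 < info_entropy(skewed) < math.log(4))
```

The uniform case maximizing S is exactly the sense in which a probability assignment is "an index of ignorance."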

    I trust this is helpful.

    The key relevance of this to the design inference — and BTW, this is not the same as the Divine inference, Tim — is that relevant configurations of the systems of interest yield huge config spaces, with islands of relevant functionality being exceedingly isolated. So, when searches based on chance + necessity only engage in in effect random walks from initial arbitrary conditions, they most likely by overwhelming statistical weight of non functional macrostates, will never find the minimally functional shores of an island of functionality. On the gamut of the observed cosmos across its lifespan. [Cf my discussion of the nanobots and microjets in APP 1 to the always linked.]

    That means that hill-climbing algorithms won’t get a chance to start working off competitive degrees of functionality.

    When it comes to crystals and snowflakes etc, we already can see that there is an ordering natural regularity which works with circumstances to create order [regularity] and perhaps chance-based variability, e.g. the dendritic hexagonal snowflakes so beloved of photographers. But, to encode functional information of complexity relevant to what we need for, say, life, we are not looking at such order but at complex organisation according to codes and machinery to express the codes physically.

    Information metrics are relevant to the capacity of channels and storage entities to carry such codes. But that is not a metric of the functionality of what is for the moment in the channel or in the storage unit. Nor will lucky noise substitute for intelligent action, as we know from the law of sampling that the vast majority of samples of a population will reflect its typical cross section. That is, since non-functional states are in the relevant contexts overwhelmingly dominant, we end up in the non-functional macrostate and run out of probabilistic resources on the gamut of the observed cosmos before we can credibly access functionality through lucky noise.

    On the other hand intelligent agents routinely use understanding to design and implement functionality-rich codes that go beyond 500 – 1,000 bits, i.e. the reasonable observed-cosmos-level threshold for discoverability of even large islands of functionality by chance processes.

    GEM of TKI

  26. 26
    kairosfocus says:

    TVR, DLH et al and Onlookers:

    A further footnote on the second law of thermodynamics and its relevance to the design inference.

    For, I find the following from TVR, no. 24 supra, interesting; indeed, inadvertently revealing:

    That the conjectured evolution of life represents a local increase in order is not in and of itself enough to rule out evolution – as we have seen crystal formation entails a local increase of order. In crystallization this order is not explicitly imported across the boundary but is inherent in atomic forces, with trial and error matches being made by the random agitations of atoms and locked in by these forces if a match is found. An exhaust of waste heat to the general environment offsets the entropy decrease entailed by the crystallization. Hence, observations of local increases in order (whether from crystallization or evolution) are not in themselves violations of the second law. This is not to say that one can’t rule out evolution on other grounds such as IC (which states that the contours of stability in morphospace enabling organisms to ‘crystallize out’ in stages don’t exist) . . .

    1 –> Let’s go back, to Thaxton et al, TMLO ch 8, i.e. 1984 [and citing Yockey, Wickens et al from the 70’s – early 80’s; cf here, too, appendix 3, my always linked]:

    TMLO ch 8: Peter Molton has defined life as “regions of order which use energy to maintain their organization against the disruptive force of entropy.”1 In Chapter 7 it has been shown that energy and/or mass flow through a system can constrain it far from equilibrium, resulting in an increase in order. Thus, it is thermodynamically possible to develop complex living forms, assuming the energy flow through the system can somehow be effective in organizing the simple chemicals into the complex arrangements associated with life.

    In existing living systems, the coupling of the energy flow to the organizing “work” occurs through the metabolic motor of DNA, enzymes, etc. This is analogous to an automobile converting the chemical energy in gasoline into mechanical torque on the wheels. We can give a thermodynamic account of how life’s metabolic motor works. The origin of the metabolic motor (DNA, enzymes, etc.) itself, however, is more difficult to explain thermodynamically, since a mechanism of coupling the energy flow to the organizing work is unknown for prebiological systems . . . .

    “a periodic structure has order. An aperiodic structure has complexity.” . . . .

    “Nucleic acids [i.e. DNA, RNA] and protein are aperiodic polymers, and this aperiodicity is what makes them able to carry much more information.” . . . .

    “only certain sequences of amino acids in polypeptides and bases along polynucleotide chains correspond to useful biological functions.” . . . .

    [Orgel:] “Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.”6 [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. This seems to be the first use of the term specified complexity, i.e it is a natural emergence from OOL research in the 1970’s, not a suspect innovation of the ID movement circa 1990s] . . . .

    Yockey7 and Wickens5 develop the same distinction, that “order” is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, “organization” refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future.

    2 –> That is, the relevant distinction has been made in the relevant literature at least 24 – 35 years ago. Crystallisation [or formation of hurricanes or the like] is simply utterly distinct from code-bearing complex functional organisation that works algorithmically. So much so that the persistence of such an irrelevant argument reflects rhetorical, closed minded objectionism through the use of red herrings to lead out to handy, set up, oil-soaked strawmen that can be ignited to distract attention, and cloud and poison the atmosphere of discussion; rather than any serious issue. (And red herring, strawman and atmosphere poisoning arguments “work” in the sense of distracting attention and diverting from the merits on an issue; that is why they are so often resorted to by those who cannot answer an issue on its merits.) TVR, and others, let us not fall for such diversionary — and too often not merely delusional but, sadly, outright deceptive — tactics.

    3 –> For, the question is not whether naturally occurring boundary conditions may through natural regularities [such as ionic attraction or the like] foster the emergence of order. It is whether FSCI-rich algorithmic systems can spontaneously emerge without the commonly observed causative force for such systems, intelligent action.

    4 –> And, for that, we see a major obstacle, namely that once we cross the threshold of 500 – 1,000 bits of information storage, we are dealing with config spaces with over 10^300 cells, which suffices to so confine islands of functionality that a random walk based search process that begins at an arbitrary initial point, by overwhelming probability on the gamut of the observed cosmos, will start and terminate in nonfunctional states, never once traversing such an island of functionality. Thus, the force of Dembski and Marks’ point that the critical success factor in searches of such config spaces is the injection of active information, i.e. deriving from intelligent, purposeful, insightful, even creative agency. [That is agents DESIGN FSCI-rich algorithms and structures that physically implement them to achieve their goals.]
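The arithmetic behind the point above can be sketched with exact integer fractions. The island size and sample budget here are hypothetical placeholders (the comment itself gives only the 500 – 1,000-bit threshold), chosen only to show how quickly a 500-bit configuration space swamps a blind search:

```python
from fractions import Fraction

# Hypothetical illustration: a 500-bit configuration space, an assumed
# island of 10^50 functional configurations, and 10^80 random samples
# (a cartoon stand-in for cosmic probabilistic resources).
space = Fraction(2) ** 500       # ~3.3e150 total configurations
island = Fraction(10) ** 50      # assumed functional configurations
p_hit = island / space           # chance one random sample is functional

samples = Fraction(10) ** 80
expected_hits = samples * p_hit  # expected functional finds over all samples

print(float(p_hit), float(expected_hits))  # both vanishingly small
```

The conclusion is sensitive to the assumed island size, which is the genuinely contested quantity in the thread.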

    5 –> This also means that, relative to chance + necessity only [i.e. without active information to move from maximally improbable to highly likely], competitive, functionality-improving hill-climbing processes will not be able to get a chance to begin the climb, as minimal functionality is a condition of being able to improve through selection processes. Climbing Mt Improbable must first begin by getting to the island where it is.

    6 –> So, that local increases of order or even organisation do not violate in themselves the classical form of the 2nd law of thermodynamics is a red herring. Indeed, we see such things happening all the time, and our economy is based on it. But, when it comes to FSCI-rich systems and structures, we observe that they are invariably the product of intelligent agents when ever we can directly observe the origination process.

    7 –> The relevant underlying theoretical, scientific point — as I discussed in my always linked appendix 1 [and as I pointed to above in nos 4 and 14 – 15] — is the statistical underpinning of that second law, i.e., due to overwhelming statistical weight, it is maximally improbable to get to functionally specified complex information by chance + necessity; on the gamut of the observed cosmos.

    8 –> My nanobots and microjets thought expt shows why.

    GEM of TKI

  27. 27
    Timothy V Reeves says:

    Thanks Kairosfocus for the replies.
    .
    If I understand you correctly you affirm IC as an evolution stopper when you say, for example:
    .
    The key relevance of this to the design inference — and BTW, this is not the same as the Divine inference, Tim — is that relevant configurations of the systems of interest yield huge config spaces, with islands of relevant functionality being exceedingly isolated. So, when searches based on chance + necessity only engage in in effect random walks from initial arbitrary conditions, they most likely by overwhelming statistical weight of non functional macrostates, will never find the minimally functional shores of an island of functionality.
    .
    I’ll certainly be interested to know if you have made analytical progress in the representation of those huge configuration spaces, and can prove that the functionality we see in organic structures actually represents isolated regions of functionality. However, as I personally haven’t developed the analytical wherewithal to analyze these spaces, and as this comment thread is ostensibly about the second law (and not IC), I thought it more appropriate here to investigate the much more analytically amenable second law, and try to establish if there really is, as Granville seems to be claiming, an evolution stopper here. Therefore, it is the second law I would like to focus on in this context.
    .
    Of the second law you yourself say:
    .
    So, that local increases of order or even organisation do not violate in themselves the classical form of the 2nd law of thermodynamics is a red herring. Indeed, we see such things happening all the time, and our economy is based on it.
    .
    What are you saying here? Are you saying that the second law is not an evolution stopper after all and that perhaps we should move onto IC? – a point (if that is what you are saying) with which I am inclined to agree.
    .
    The problem with the second law is that it deals only with a rather crude measure w (where S = k log w), although as your quotes affirm this is a ‘perfectly objective quantity’. Crystals have a much lower w and therefore a much higher order than organisms for the simple reason that there are far more ways of being an organism than a crystal, although, of course, organisms are still highly ordered in a relative sense.
    .
    But organisms are not just highly ordered they also have another quality called ‘complexity’, a quality that is difficult to characterize and pin down quantitatively. I myself have been aware of the question of ‘complexity’ (as opposed to simple order) since the seventies, which puts me back well into your 25-35 year range! Perhaps something like ‘mutual information’ or some other quantity might capture the essence of organized complexity, but the fact is that the second law deals only in w, plain and simple, and as such doesn’t, as far as I can see, act as a bar to evolution …. Unless, of course, someone has developed a much more sophisticated rendering of the second law that is sensitive to things like ‘complexity’, ‘morphospace’, ‘functional isolation’ etc. I would like to know if anyone has.
    .
    I want to know the answer to this question: Does the second law as it currently stands (or at least as I understand it) constitute an evolution stopper by itself? (In this connection I’ll have a look at your appendix) I don’t want to look as though I am deliberately obstructive, awkward or a Machiavellian character with an ulterior agenda, but that is probably a hazard I’ll have to face because as I have said before I’ve landed in a war zone where suspicions are rampant and emotions are running high. In such a zone one is carefully and suspiciously read for one’s true allegiance.

  28. 28
    DLH says:

    TVR
    Try thinking through what is required to read/write information onto your RAM, hard drive, or CD.
    Contrast that with how a crystal is formed.

    For information to be recorded, the material must be able to be physically changed to reflect a 0 or 1; e.g. high/low magnetism, or optical reflection, or charge etc.

    There are no physical laws or “self organization” that “require” any specific coded message. To the contrary, any “self organization” prevents coded information from being recorded.

    What will happen if you put your CD or hard drive or RAM in the oven and “bake” at 450F for 3 hours?

    Will that process record your cookbook since the food was previously cooked in that oven?

    Or will it destroy any recipes you might have had recorded on that storage media?

    Entropy will destroy coded information.
    Entropy cannot create coded information.

    That is the practical, colloquial heart of the issue as to why entropy degrades biotic systems rather than “creating” them. Thus entropy is a show stopper for evolution once this difference between coded information and self-organization is recognized.

    The rest is working out math details and explanations, probabilities etc.

    Look again at Granville’s work in light of coded information vs self “ordering” by natural law.

    An encyclopedia does not come about by random mutation and natural selection.

  29. 29
    Timothy V Reeves says:

    Sorry to stretch your patience DLH. I’ll take another look at Granville’s work, but let me just summarize my current position:
    .

    1. w (as in S = k log w) is not a fine-tuned enough parameter to pick up all the subtleties of complex organization. As far as w is concerned organisms are structures more disordered than crystals but less disordered than gases; that’s all that w ‘knows’ about. To ‘w’ organisms look like ‘fancy crystals’.
    .

    2. No one is suggesting that entropy creates anything. It is only a bulk measure that in isolated systems increases with time. Yes, of course, advancing entropy ultimately disrupts all forms of order, crystals as well as humans, but fine distinctions of structure are not part of the remit of so gross a quantity, which tells us very little about the details as the system runs down; the run-down entailed by dw/dt greater than zero doesn’t eliminate local increases of order. And remember, as far as entropy is concerned organisms are just ordered lumps of matter – it doesn’t measure our intuitive notion of organized complexity or ‘mutual information’.
    .
    3. Ovens create a temperature gradient and a temperature gradient entails an order that can quickly be converted into organized forms of energy, like kinetic or potential. (e.g. weather systems).
    .
    4. DLH, unless you are successfully reading the subtext, I’m not, repeat not, repeat not, making here any claims in this particular connection about ‘self organization’ or even whether evolution has actually occurred or not (IC might well stop the whole show). All I am saying is that the second law, as it stands, is too blunt an instrument to eliminate evolution. I would be saying these things even if I became convinced of ID.
    .
    5. Your examples of information media cookery (must try it when I fancy a byte or two) may elicit an intuitive gut reaction about the implausibility of evolution, but I’m not, repeat not, here referring to the likelihood of evolution. I’m only commenting on the soundness of using the second law as an evolution stopper. In any case there are gut reactions and gut reactions. The SETI people look into the sky and say “Look at all those stars, life must be out there somewhere!” Gut reactions may convey valid information but they must be treated with caution.
    .
    Once again sorry for stretching everyone’s patience. (Even Gpuccio has given up on me!) I’ll have another look at Granville’s work and see where I am going wrong. It is always possible that Granville is working with an upgraded concept of entropy that picks out the more subtle features of complex organization/mutual information. In this connection what we require is a quantity, call it ‘X’ (perhaps looking a bit like mutual information) measuring organized complexity, that peaks for organisms, somewhere between w = 0 and w = maximum. But that is just a shot from the hip, so don’t take it too seriously.

  30. 30
    kairosfocus says:

    Okay:

    Finally able to access UD today.

    TVR, re: w (as in S = k log w) is not fine tuned enough a parameter to pick up all the subtleties of complex organization

    1 –> Last time I checked w [strictly omega, but I understand Boltz used W . . .] is the number of microstates compatible with a macrostate, i.e. it is the statistical weight of a given macrostate. (Onlookers: Macrostates in effect are coarse-/ lab- scale observable states, which usually leave underlying microstates to vary across a wide array that can in principle be counted or at least estimated.)

    2 –> When we look at micro-level configurations of matter and energy, we can indeed define a wider config space in which the relevant functional and non-functional macrostates occur with their relevant weights.

    3 –> It is intuitively obvious that non-functional states overwhelm functional ones, as is a commonplace.

    4 –> Doubtless, you will be familiar with how the statistical form of the 2nd law is set up through comparative weight counts driving probabilities of states, and indeed how s = k ln w is set up by partitioning a body with w states into 2 parts with microstates w1 and w2, leading to a log measure. [Onlookers, an excellent source is Nash’s Elements of Stat Thermo-D. I only wish I had had a little less Physics arrogance when I was an undergrad and had been willing to learn from a Chemist!]
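The partitioning argument above is just the additivity that the log measure buys: for independent parts, w = w1 × w2, so S = k ln w splits into S1 + S2. A one-line check (with k set to 1 for simplicity):

```python
import math

# Partition a system with w microstates into two independent parts with
# w1 and w2 microstates each: w = w1 * w2.
w1, w2 = 10 ** 6, 10 ** 9
w = w1 * w2

S, S1, S2 = math.log(w), math.log(w1), math.log(w2)

# Additivity holds only because log(a*b) = log(a) + log(b); a linear
# measure in w would not be additive over subsystems.
print(abs(S - (S1 + S2)) < 1e-9)
```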

    5 –> Indeed, the point on likelihood of being in different configs at random in effect, is the root of Sir Fred Hoyle’s 747 in a junkyard remark, which I took down to semi molecular scale in my always linked point 6 appendix a. [Onlookers observe how studiously TVR avoids addressing this. And FYI TVR, the rough cell splitting and counting I do there is similar to that done by one certain Josiah Willard Gibbs.]

    6 –> By looking at my discussion as just linked and how it interacts with Thaxton et al’s TMLO chs 7 – 9, you will see how Brillouin’s negentropy view and metric of information can be used to analyse the movement from [a] a scattered at random state to [b] a clumped at random state to [c] a functional state. In each case, the number of ways matter and relevant energy are configured becomes increasingly confined as we move from one state to the next so the number of available configurations from the overall config space falls.

    7 –> Thus, w falls twice in succession, and so entropy falls/ Brillouin information metric [recall there are several valid metrics of information] rises as work is done on the originally chaotic matter in TBO’s prebiotic soup or my vat of pre-microjet liquid.

    8 –> In effect we are undoing diffusion, which is an entropy-increasing process, through increasing volumetric confinement of particles: in my thought expt, from 1 cu m to about 1 cu cm then to a much smaller possibility of order 10^-6 cu m for individual parts to get to a functional config. I used 1 micron cubed cells to do rough state counts, to make the point [the cells are too coarse but that is good enough for the purpose in view].

    So, sorry, TVR: I HAVE done the counts. And, TBO in TMLO did the thermodynamics oh about 25 years back, and Bradley has updated since in more modern terms using Cytochrome C.

    The message is in the end simple, though too often hard to accept: functioning configs are very rare and isolated [even if clustered as islands] in the overall config space available to monomers comprising life functional molecules then the living cells that use these molecules to function. So much so that there is a considerably complex mechanism to form cells and make them work, based on algorithmic codes.

    For DNA, to pick just one entity, that is to be found in strings from about 300,000 – 500,000 base pairs up. A 300k base pair molecule has an available clustered config space (as the info is not stored in the chemistry of chaining) of 4^300k ~ 9.94*10^180,617.

    The number of quantum states of our cosmos across its lifespan is about 10^150, i.e the number of states it can access. If we allow islands of functionality of 10^150 states, and we allow 10^1000 of them [vastly more than the number of individual dna based life forms that could ever exist in our observed cosmos] there would be 10^1150 such states, in 10^1000 islands of 10^150 states.

    With a space of 9.9 * 10^180,617 config states, the islands of function will be vastly inaccessible to any random walk initiating process. And, that is assuming we can naturally synthesise the monomers, separate the chiralities and chain to the required lengths.
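The figures in the last three paragraphs are easy to sanity-check in a few lines (a sketch; the island counts are the comment’s own generous assumptions):

```python
from math import log10

# Config space of a 300,000-base DNA string: 4^300,000
log10_space = 300_000 * log10(4)
print(round(log10_space))        # 180618, i.e. ~9.94*10^180,617

# Generous allowance for functionality, per the comment:
# 10^1000 islands of 10^150 states each
log10_functional = 1000 + 150    # 10^1150 functional states

# log10 of the fraction of the space that is functional
print(round(log10_functional - log10_space))   # about -179468
```

That fraction, about 10^-179,468, is the quantitative content of the “vastly inaccessible” claim.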

    Sorry, I know empirically that intelligent agents, using understanding and algorithmic processes can create entities that are that isolated in such config spaces. So if offered the choice of believing that chance + necessity got us to life from some prebiotic soup or other, or one or more intelligent agent[s] did it, the choice is obvious, save to those who are committed to the impossibility of such agents at the required place and time.

    As a matter of fact, that we are so fearfully and wonderfully made is itself strong testimony to such agent[s] at the time and place in question, once one has not closed-mindedly begged the question. For, we have a known capable process vs a known incapable process (by overwhelming improbability). The only out for the latter would be to postulate an unobserved quasi-infinite wider cosmos as a whole embedding a similarly quasi-infinite number of sub cosmi in which the physics and chemistry etc vary to give enough probabilistic resources to get the odds down.

    It is therefore no surprise to see this naked ad hoc metaphysical resort on the part of an increasing number of evolutionary materialists. But, we do observe just one cosmos, and it is finite in time, extent and matter, so far as we can observe. So somebody is running away from the empirical evidence when it no longer fits his views and preferences, into ad hoc metaphysical speculation.

    That means that the effective choice is between a quasi-infinite unobserved wider cosmos as a necessary being and an agent capable of cosmogenesis and origination of life.

    Which is the better explanation is not that hard to choose . . . ;-)

    GEM of TKI

  31.
    Timothy V Reeves says:

    Let me clarify my perspective Kairosfocus. The ID community suggests that there are two major roadblocks on the conjectured highway of evolutionary development. Viz:
    .
    1. The second Law (The ostensive subject of this comment thread)
    2. Irreducible complexity
    .
    I’m certainly interested in the work of the ID community and wish to find out whether either or both of the above block evolution. Now you are obviously very proud of your work Kairosfocus, but before I give it more than a general perusal I need answers to these questions:
    .
    1. Would you say that your work successfully demonstrates that one or both of the above points are roadblocks on the evolutionary road? Or perhaps you have found other roadblocks?
    .
    2. Where do you fit in the ID constellation? Obviously I need to continue to get to grips with the work of the star performers like Behe, Dembski and Granville, but how does the ID community – especially the contributors to this blog – react to your work? Can you point me to any comments on your work by the ID community? Perhaps they (e.g. Dembski and Granville) could even give me some recommendations in this thread. After all, ‘Self praise is no recommendation’!

    .
    On a technical note, let me confess that I still don’t see why the second law, as I understand it to be formulated, roadblocks evolution. If we take the log of the quantity W = PRODUCT OVER wi (where wi = microstates consistent with the macrostate of subsystem i) then we can show that the total entropy S of the system is equal to the sum of the entropies over the subsystems. The second law as it stands puts the constraint dS/dt > 0 on the total system. This is too weak a constraint to eliminate decreases in entropy in the subsystems; and as entropy does not provide a one-to-one measure of organized complexity, the second law leaves open the question of whether an increase in order in a subsystem is due to the appearance of organized complexity or to something more banal. This is not to say, of course, that from other considerations (such as IC) organized complexity can be shown to be overwhelmingly improbable. The second law is a derivative of probabilities and statistical weighting, but as with other derivative products there may be a loss of content in the derivation process, and alas for the ID community the second law is not readily reversed to recover probabilities. I have yet to hack into your writings in earnest, but I am interested to see how clearly the concept of organized complexity comes out in your work.
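The “too weak a constraint” point can be made concrete with toy numbers (an illustrative sketch; the microstate counts are arbitrary):

```python
import math

# Work in units where Boltzmann's constant k = 1
def S(w):
    return math.log(w)

# Two subsystems; the total microstate count is the product w1 * w2.
# Before: both moderately disordered.
w1_before, w2_before = 1e6, 1e6
# After: subsystem 1 orders up (its count falls) while subsystem 2
# disorders enough that the total still rises.
w1_after, w2_after = 1e3, 1e12

dS1 = S(w1_after) - S(w1_before)                              # < 0
dS_total = S(w1_after * w2_after) - S(w1_before * w2_before)  # > 0

assert dS1 < 0 and dS_total > 0
```

So a positive total dS is fully compatible with a local entropy decrease, which is exactly why the constraint by itself says nothing about what the local decrease consists of.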
    .
    Many critics of ID treat people like yourself as if you are only worthy of insult and mockery, as I am sure you have experienced. As I endeavor to approach the whole subject by practicing a discipline of studied detachment and fairness I hope I won’t find myself on the receiving end of any spiritual bullying that in some cases is the abrasive ID match to the mockery and insult.

  32.
    DLH says:

    TVR at 29 and 31

    “All I am saying is that the second law, as it stands, is too blunt an instrument to eliminate evolution. I would be saying these things even if I became convinced of ID.”

    Your bluntness objection is worth exploring. At least on trying to explain systems.

    Increasing entropy (movement toward more probable states) destroys both physical “order” and design information (e.g., data on a computer hard drive).

    Arguments defending evolution against entropy objections typically point to local formation of physical “order”, such as crystallization.

    However, such local increases in physical “order” (local reductions in entropy) cannot explain the formation of CSI.

    Granville’s arguments on “order” entering the system are particularly meaningful regarding CSI, as in his encyclopedia example.

    So the challenge is to develop new formulations that clearly distinguish between CSI and mere physical “order”.

    It’s on my To Do list.

  33.
    DLH says:

    Timothy V. Reeves
    Please read again:
    A Second Look at the Second Law
    Granville Sewell
    Especially:

    But getting the right number on 5 or 6 balls is not extremely improbable; in thermodynamics “extremely improbable” events involve getting the “right number” on 100,000,000,000,000,000,000,000 or so balls! If every atom on Earth bought one ticket every second since the big bang (about 10^70 tickets) there is virtually no chance that any would ever win even a 100-ball lottery, much less this one. And since the second law derives its authority from logic alone, and thus cannot be overturned by future discoveries, Sir Arthur Eddington called it the “supreme” law of Nature [The Nature of the Physical World, Macmillan, 1929].

    David Aikman observes:

    Schroeder applied probability theory to the “Monkey Theorem” and calculated that the chance of getting Sonnet Eighteen by chance was 26 multiplied by itself 488 times (488 is the number of letters in the sonnet) or, in base 10, 10 to the 690th. . . . As Flew concluded, “if the theorem [the Monkey Theorem] won’t work for a single sonnet, then of course it’s simply absurd to suggest that the more elaborate feat of the origin of life could have been achieved by chance.”
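Both quoted figures are easy to reproduce (a sketch; the per-ticket odds for the 100-ball lottery are modeled here as matching 100 independent digits, an assumption the quote does not spell out):

```python
from math import log10

# Schroeder: 26 letters multiplied 488 times for Sonnet Eighteen
log10_sonnet = 488 * log10(26)
print(int(log10_sonnet))   # 690, i.e. ~10^690 possibilities

# Sewell: ~10^70 tickets ever bought. If each ticket wins with
# probability 10^-100 (assumed model of a 100-ball lottery), the
# expected number of winners over all tickets is 10^(70-100):
print(70 - 100)            # -30, i.e. ~10^-30 expected winners
```

On that model the expected number of winners across all tickets ever bought is about 10^-30, which is the quantitative sense of “virtually no chance”.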

  34.
    Timothy V Reeves says:

    Thanks for that, DLH. I’ll do some investigations.

  35.
    DLH says:

    Timothy V Reeves

    See also: DLH comment #88 under Does Darwinian Evolution include the Origin of Life.

    I consider the Origin of Life another barrier (or a subset of the Second Law). Darwinian Evolution requires self replicating life for “natural selection.” For the same materialistic assumptions, the Origin of Life is an even greater challenge to Darwinian Evolution, since it cannot rely on “natural selection” to supposedly come up with the very high Complex Specified Information in even the simplest self reproducing cell – which all has to be there and functioning for “evolution” to continue.

  36.
    kairosfocus says:

    TVR:

    First, please, deal with the issue, do not attack the man — whether directly or by subtle insinuations.

    [Onlookers: Had TVR taken time to glance at Appendix 1 he would have seen that it answers, step by step, the key issues based on foundational thermodynamics principles accessible to one who has done a first college physical science course and a similar mathematical course. Indeed, there is even a link to basic presentations of the underlying science. In so doing, it adverts to the Thaxton et al work of 1984 and uses fairly accessible standard results and reasoning in the context of drawing out the implications of thermodynamics and associated statistical mechanics principles, for the claimed OOL and OO body-plan level biodiversity. The issue is the chain of reasoning and evidence, not me and who or what I am. To dodge the issue to attack the man directly or indirectly (by insinuations and loaded language) is to forfeit the issue.]

    Having noted such, I will pause to remark on points of significance, observing that the below is not a substitute for what I have already linked:

    1] Roadblocks:

    We observe that three causal factors are commonly encountered: chance, natural lawlike forces giving rise to natural regularities, and agency. Situations of complexity have high contingency not explicable by natural regularities alone. Where we have complex, functionally specified information, chance forces are incapable of accounting for these phenomena on the gamut of the observed cosmos, due to probabilistic resource exhaustion. This is, in a nutshell, also the basic framework for the statistical justification for 2 LOT. So, by the same principles as we use to justify 2 LOT, there is a barrier to OOL and OO body plan level biodiversity.

    In cases of entities where multiple components must be fitted together to achieve a function or else it will fail to work, a similar issue obtains: origin of complex [beyond 500 – 1,000 bits of information storage] body plans, synthesis of their components and assembly — even inclusive of co-option of existing parts – is maximally improbable for RV + NS on the scope of the observed cosmos. The Cambrian life revolution is a capital case in point, where there is need to account for dozens of phyla and sub-phyla at once, within a short window on earth.

    2] Who’s you . . .

    And, Mr Reeves, who are YOU?

    More to the point, I am a scientist and science educator in my own right who has looked at the issue for himself. I share my reasoning and conclusions, and discuss them in especially this blog with its many participants, a significant number of whom have regarded my remarks as a valuable contribution on the merits.

    I invite you to address these issues on the said merits.

    3] The second law as it stands puts the constraint dS/dt > 0 on the total system.

    Correct. As perusal of Clausius’ first example will show, as I discuss in App 1 as always linked, a hotter subsystem giving up d’Q to a cooler one will undergo entropy loss overbalanced by the entropy rise in the cooler system. But this immediately implies that an energy-importing system naturally tends to INCREASE its entropy.

    The way around that, is to go to systems that couple input energy to do work, exhausting waste heat in the process so that overall entropy rises even as local order is created. As I discussed in the always linked (bringing the Mountain to Mohammed . . .):

    2] But open systems can increase their order: This is the “standard” dismissal argument on thermodynamics, but it is both fallacious and often resorted to by those who should know better. My own note on why this argument should be abandoned is:

    a] Clausius is the founder of the 2nd law, and the first standard example of an isolated system — one that allows neither energy nor matter to flow in or out — is instructive, given the “closed” subsystems [i.e. allowing energy to pass in or out] in it. Pardon the substitute for a real diagram, for now:

    Isol System:

    | | (A, at Thot) –> d’Q, heat –> (B, at T cold) | |

    b] Now, we introduce entropy change

    dS >/= d’Q/T . . . “Eqn” A.1

    c] So, dSa >/= -d’Q/Th, and dSb >/= +d’Q/Tc, where Th > Tc

    d] That is, for system,

    dStot >/= dSa + dSb >/= 0, as Th > Tc . . . “Eqn” A.2

    e] But, observe: the subsystems A and B are open to energy inflows and outflows, and the entropy of B RISES DUE TO THE IMPORTATION OF RAW ENERGY.

    f] The key point is that when raw energy enters a body, it tends to make its entropy rise. For the injection of energy to instead do something useful, it needs to be coupled to an energy conversion device.

    g] When such devices, as in the cell, exhibit FSCI, the question of their origin becomes material, and in that context, their spontaneous origin is strictly logically possible but negligibly different from zero probability on the gamut of the observed cosmos. (And, kindly note: the cell is an energy importer with an internal energy converter. That is, the appropriate entity in the model is B and onward B’ below. Presumably as well, the prebiotic soup would have been energy importing, and so materialistic chemical evolutionary scenarios therefore have the challenge to credibly account for the origin of the FSCI-rich energy converting mechanisms in the cell relative to Monod’s “chance + necessity” [cf also Plato’s remarks] only.)

    h] Now, as just mentioned, certain bodies have in them energy conversion devices: they COUPLE input energy to subsystems that harvest some of the energy to do work, exhausting sufficient waste energy to a heat sink that the overall entropy of the system is increased. Illustratively, for heat engines:

    | | (A, heat source: Th): d’Qi –> (B’, heat engine, Te): –>
    d’W [work done on say D] + d’Qo –> (C, sink at Tc) | |

    i] A’s entropy: dSa >/= – d’Qi/Th

    j] C’s entropy: dSc >/= + d’Qo/Tc

    k] The rise in entropy in B, C and in the object on which the work is done, D, say, compensates for that lost from A. The second law holds for heat engines.

    l] However, B, since it now couples energy into work and exhausts waste heat, does not necessarily undergo a rise in entropy on importing d’Qi. [The problem is to explain the origin of the heat engine — or more generally, energy converter — that does this, if it exhibits FSCI.]

    m] There is also a material difference between the sort of heat engine [an instance of the energy conversion device mentioned] that forms spontaneously as in a hurricane [directly driven by boundary conditions in a convective system on the planetary scale, i.e. an example of order], and the sort of energy conversion device found in living cells [the DNA-RNA-Ribosome-Enzyme system, which exhibits massive FSCI].

    n] In short, the root problem is the ORIGIN of such a FSCI-based energy converter through causal mechanisms traceable only to chance conditions and undirected [non-purposive] natural forces. This problem yields a conundrum for chem evo scenarios, such that inference to agency as the probable cause of such FSCI — on the analogy of the cases where we do directly know the causal story — becomes the better explanation. As TBO say, in bridging from a survey of the basic thermodynamics of living systems in CH 7, to that more focussed discussion in ch’s 8 – 9:

    “While the maintenance of living systems is easily rationalized in terms of thermodynamics, the origin of such living systems is quite another matter. Though the earth is open to energy flow from the sun, the means of converting this energy into the necessary work to build up living systems from simple precursors remains at present unspecified (see equation 7-17). The “evolution” from biomonomers to fully functioning cells is the issue. Can one make the incredible jump in energy and organization from raw material and raw energy, apart from some means of directing the energy flow through the system? In Chapters 8 and 9 we will consider this question, limiting our discussion to two small but crucial steps in the proposed evolutionary scheme, namely, the formation of protein and DNA from their precursors.

    It is widely agreed that both protein and DNA are essential for living systems and indispensable components of every living cell today.11 Yet they are only produced by living cells. Both types of molecules are much more energy and information rich than the biomonomers from which they form. Can one reasonably predict their occurrence given the necessary biomonomers and an energy source? Has this been verified experimentally? These questions will be considered . . . ” [Cf summary in the peer-reviewed journal of the American Scientific Affiliation, “Thermodynamics and the Origin of Life,” in Perspectives on Science and Christian Faith 40 (June 1988): 72-83, pardon the poor quality of the scan. NB: as the journal’s online issues will show, this is not necessarily a “friendly audience.”]

    4] This is too weak a constraint to eliminate decreases in entropy in the subsystems and as entropy does not provide for a one-on-one measure of organized complexity the second law leaves open the question of whether an increase in order in a subsystem is due to the appearance of organized complexity or something else more banal.

    Already addressed, cf supra.

    5] The second law is a derivative of probabilities and statistical weighting, but like other derivative products there may be a loss of content in the derivative process

    Kindly cf my thought experiment at point 6 in the same appendix as already linked. One may show that there is a fall in w as we move from scattered to clumped at random to functionally configured states, and that the fall is so large in each case that the basic point is plain.

    6] I hope I won’t find myself on the receiving end of any spiritual bullying that in some cases is the abrasive ID match to the mockery and insult.

    I have invited discussion on the merits; why are you presuming or suggesting that I would set out to insult and attack unprovoked?

    [Onlookers: is such behaviour by TVR not simply a subtler form of ad hominem?]

    I again invite discussion on the merits.

    GEM of TKI
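As a footnote to “Eqn” A.1 and “Eqn” A.2 in the comment above, the two-body Clausius example can be run with assumed numbers, taking the reversible limit (treating >/= as equality):

```python
# Assumed example values: d'Q = 100 J flows from A (Th = 400 K)
# to B (Tc = 300 K) inside the isolated system.
dQ, Th, Tc = 100.0, 400.0, 300.0

dSa = -dQ / Th          # hot body A loses entropy: -0.25 J/K
dSb = +dQ / Tc          # cold body B gains more: +0.333... J/K
dS_total = dSa + dSb    # net rise, since Th > Tc

print(dS_total)         # about +0.0833 J/K
assert dS_total > 0
```

The net rise depends only on Th > Tc, which is why heat spontaneously flows hot-to-cold in this framework.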

  37.
    kairosfocus says:

    PS: I might as well add in some remarks on the stat thermo-D form of 2 LOT:

    4] Yavorski and Pinski, in the textbook Physics, Vol I [MIR, USSR, 1974, pp. 279 ff.], summarise the key implication of the macro-state and micro-state view well: as we consider a simple model of diffusion, let us think of ten white and ten black balls in two rows in a container. There is of course but one way in which there are ten whites in the top row; the balls of any one colour being for our purposes identical. But on shuffling, there are 63,504 ways to arrange five each of black and white balls in the two rows, and 6-4 distributions may occur in two ways, each with 44,100 alternatives. So, if we for the moment see the set of balls as circulating among the various different possible arrangements at random, and spending about the same time in each possible state on average, the time the system spends in any given state will be proportionate to the relative number of ways that state may be achieved. Immediately, we see that the system will gravitate towards the cluster of more evenly distributed states. In short, we have just seen that there is a natural trend of change at random, towards the more thermodynamically probable macrostates, i.e. the ones with higher statistical weights. So “[b]y comparing the [thermodynamic] probabilities of two states of a thermodynamic system, we can establish at once the direction of the process that is [spontaneously] feasible in the given system. It will correspond to a transition from a less probable to a more probable state.” [p. 284.] This is in effect the statistical form of the 2nd law of thermodynamics. Thus, too, the behaviour of the Clausius isolated system above is readily understood: importing d’Q of random molecular energy so far increases the number of ways energy can be distributed at micro-scale in B, that the resulting rise in B’s entropy swamps the fall in A’s entropy.
Moreover, given that FSCI-rich micro-arrangements are relatively rare in the set of possible arrangements, we can also see why it is hard to account for the origin of such states by spontaneous processes in the scope of the observable universe. (Of course, since it is as a rule very inconvenient to work in terms of statistical weights of macrostates [i.e. W], we instead move to entropy, through s = k ln W. Part of how this is done can be seen by imagining a system in which there are W ways accessible, and imagining a partition into parts 1 and 2. W = W1*W2, as for each arrangement in 1 all accessible arrangements in 2 are possible and vice versa, but it is far more convenient to have an additive measure, i.e. we need to go to logs. The constant of proportionality, k, is the famous Boltzmann constant and is in effect the universal gas constant, R, on a per molecule basis, i.e. we divide R by the Avogadro Number, NA, to get: k = R/NA. The two approaches to entropy, by Clausius, and Boltzmann, of course, correspond. In real-world systems of any significant scale, the relative statistical weights are usually so disproportionate, that the classical observation that entropy naturally tends to increase, is readily apparent.)
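The Yavorski and Pinski ball counts quoted above can be verified directly (a minimal sketch of the statistical-weight comparison):

```python
from math import comb

# 10 white + 10 black balls over two rows of 10 slots. With w
# whites in the top row (and 10 - w in the bottom), the whites'
# positions can be chosen in C(10, w) * C(10, 10 - w) ways.
def weight(w):
    return comb(10, w) * comb(10, 10 - w)

print(weight(10))   # 1     : all ten whites in the top row
print(weight(5))    # 63504 : the evenly mixed 5-5 macrostate
print(weight(6))    # 44100 : a 6-4 split (occurs two ways, w = 6 or 4)

# The mixed macrostates dominate, so random shuffling gravitates
# toward them: the statistical form of the 2nd law in miniature.
assert weight(5) > weight(6) > weight(10)
```

A randomly shuffling system spends time in each macrostate roughly in proportion to these weights, which is the whole content of the "natural trend" claim.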

    Now, put this to work:

    i] Consider the assembly of a Jumbo Jet, which requires intelligently designed, physical work in all actual observed cases. That is, orderly motions were impressed by forces on selected, sorted parts, in accordance with a complex specification. (I have already contrasted the case of a tornado in a junkyard: it is logically and physically possible for it to do the same, but the functional configuration[s] are so rare relative to non-functional ones that random search strategies are maximally unlikely to create a flyable jet, i.e. we see here the logic of the 2nd Law of Thermodynamics, statistical thermodynamics form, at work. [Intuitively, since functional configurations are rather isolated in the space of possible configurations, we are maximally likely to exhaust available probabilistic resources long before arriving at such a functional configuration or “island” of such configurations (which would be required before hill-climbing through competitive functional selection, a la Darwinian natural selection, could take over . . . ); if we start from an arbitrary initial configuration and proceed by a random walk.])

    ii] Now, let us shrink the Hoylean example, to a micro-jet so small [~ 1 cm or even smaller] that the parts are susceptible to Brownian motion, i.e. they are of about micron scale [for convenience] and act as “large molecules.” . . . Let’s say there are about a million of them, some the same, some different etc. In principle, possible: a key criterion for a successful thought experiment. Next, do the same for a car, a boat and a submarine, etc.

    iii] In several vats of “a convenient fluid,” each of volume about a cubic metre, decant examples of the differing mixed sets of nano-parts; so that the particles can then move about at random, diffusing through the liquids as they undergo random thermal agitation.

    iv] In the control vat, we simply leave nature to its course.

    Q: Will a car, a boat a sub or a jet, etc, or some novel nanotech emerge at random? [Here, we imagine the parts can cling to each other if they get close enough, in some unspecified way, similar to molecular bonding; but that the clinging force is not strong enough at appreciable distances [say 10 microns or more] for them to immediately clump and precipitate instead of diffusing through the medium.]

    ANS: Logically and physically possible (i.e. this is subtler than having an overt physical force or potential energy barrier blocking the way!) but the equilibrium state will on statistical thermodynamics grounds overwhelmingly dominate — high disorder.

    Q: Why?

    A: Because there are so many more accessible scattered state microstates than there are clumped-at-random state ones, or even more so, functionally configured flyable jet ones . . . .

    v] Now, pour in a cooperative army of nanobots into one vat, capable of recognising jet parts and clumping them together haphazardly. [This is of course, work, and it replicates bonding at random. Work is done when forces move their points of application along their lines of action. Thus in addition to the quantity of energy expended, there is also a specificity of resulting spatial rearrangement depending on the cluster of forces that have done the work . . . .

    Q: After a time, will we be likely to get a flyable nano jet?

    A: Overwhelmingly, on probability, no. (For, the vat has ~ [10^6]^3 = 10^18 one-micron locational cells, and a million parts or so can be distributed across them in vastly more ways than they could be across say 1 cm or so for an assembled jet etc or even just a clumped together cluster of micro-parts. [a 1 cm cube has in it [10^4]^3 = 10^12 cells, and to confine the nano-parts to that volume obviously sharply reduces the number of accessible cells consistent with the new clumped macrostate.] But also, since the configuration is constrained, i.e. the mass in the microjet parts is confined as to accessible volume by clumping, the number of ways the parts may be arranged has fallen sharply relative to the number of ways that the parts could be distributed among the 10^18 cells in the scattered state . . . .

    vi] For this vat, next remove the random cluster nanobots, and send in the jet assembler nanobots. These recognise the clumped parts, and rearrange them to form a jet, doing configuration work. (What this means is that within the cluster of cells for a clumped state, we now move and confine the parts to those sites consistent with a flyable jet emerging. That is, we are constraining the volume in which the relevant individual parts may be found, even further.) A flyable jet results — a macrostate with a much smaller statistical weight of microstates. We can see that of course there are vastly fewer clumped configurations that are flyable than those that are simply clumped at random, and thus we see that the number of microstates accessible due to the change, [a] scattered –> clumped and now [b] onward –> functionally configured macrostates, has fallen sharply, twice in succession. Thus, by Boltzmann’s result s = k ln W, we also have seen that the entropy has fallen in succession as we moved from one state to the next, involving a fall in s on clumping, and a further fall on configuring to a functional state; dS tot = dS clump + dS config. [Of course to do that work in any reasonable time or with any reasonable reliability, the nanobots will have to search and exert directed forces in accord with a program, i.e. this is by no means a spontaneous change, and it is credible that it is accompanied by a compensating rise in the entropy of the vat as a whole and its surroundings. This thought experiment is by no means a challenge to the second law. But, it does illustrate the implications of the probabilistic reasoning involved in the microscopic view of that law, where we see sharply configured states emerging from much less constrained ones.]

    So, by scaling down Sir Fred Hoyle’s tornado-in-a-junkyard 747 remarks, we can see how the stat mech principles underlying the 2 LOT apply to OOL.
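The twice-falling w in steps v] and vi] can be roughed out with the thought experiment's own cell counts (a sketch: locational microstates only, parts treated as distinguishable, which is all the qualitative point needs):

```python
import math

k = 1.380649e-23          # Boltzmann constant, J/K
n_parts = 10**6           # about a million micro-jet parts

# 1-micron locational cells, per the thought experiment:
cells_scattered = 10**18  # 1 m^3 vat
cells_clumped   = 10**12  # 1 cm^3 clump

# ln W for n distinguishable parts spread over c cells ~ n * ln(c)
def lnW(cells):
    return n_parts * math.log(cells)

# Entropy change on clumping (scattered -> clumped):
dS_clump = k * (lnW(cells_clumped) - lnW(cells_scattered))
print(dS_clump)           # about -1.9e-16 J/K: a fall, as argued

assert dS_clump < 0
```

Configuring the clump into a flyable jet confines each part still further, so dS config is negative again, and the two falls add, matching the dS tot = dS clump + dS config decomposition in step vi].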

    On OO body plan level biodiversity:

    viii] Now, let us go back to the vat. For a large collection of vats, let us now use direct microjet assembly nanobots, but in each case we let the control programs vary at random a few bits at a time – say, hit them with noise bits generated by a process tied to a zener noise source. We put the resulting products in competition with the original ones, and if there is an improvement, we allow replacement. Iterate, many, many times.

    Q: Given the complexity of the relevant software, will we be likely to for instance come up with a hyperspace-capable spacecraft or some other sophisticated and un-anticipated technology? (Justify your answer on probabilistic grounds.)

    My prediction: we will have to wait longer than the universe exists to get a change that requires information generation (as opposed to information and/or functionality loss) on the scale of 500 – 1000 or more bits. [See the info-generation issue over macroevolution by RM + NS?]

    ix] Try again, this time to get to even the initial assembly program by chance, starting with random noise on the storage medium. See the abiogenesis/ origin of life issue?

    Okay, I think that is enough to spark discussion on the merits.

    GEM of TKI

  38.
    Timothy V Reeves says:

    Thanks, Kairosfocus, for honoring me with a long reply. I realize that there are few things more frustrating than believing you have achieved, through quite strenuous and time-consuming effort, some useful conclusions, and then someone coming along and pronouncing as if you have never spoken. As you can see from 33 and 35 above, DLH has given me some interesting reading (and some re-reading), which I am currently going through, so I’ll bundle your stuff along with the stuff DLH has given me. This should keep my nose down for a bit, before I start pronouncing again.

    As for me, I’m just an amateur science dabbler, with no reputation to think about, or face saving to be done or lecture circuit ‘customer base’ to satisfy. I like to travel light. Sorry you haven’t got in me anyone of any status to look at your work. But consider it I will. However, as you have shrewdly observed I can also be a pretty nasty piece of work when I want to: issue evasions, sneaky ad hominem, a master of innuendo. You’ve seen through me pretty quickly haven’t you? Unless you want me to just run away with my tail between my legs then you had better watch your back! Alternatively I might just decide to stick around like a bad smell, in which case the UD experience will become a little more eclectic for you.

  39. kairosfocus says:

    TVR:

    You raised a challenge on the merits [rhetorical stratagems aside], and I have responded on the merits. Kindly, therefore, address the challenge on the merits.

    Your move . . .

    GEM of TKI

  40. Frost122585 says:

    TVR says,

    “Crystals crystallize because the physical regime providentially provides for a relatively simple morphospace containing ‘contours of stability’, ‘ratchets of formation’, or ‘Dawkins slopes’ or whatever you want to call them. In this process there are no imports of explicit information across the boundary, but there is an export of randomized kinetic energy (that is, heat) maintaining a positive overall value in the entropy increment.”

    No, that is incorrect. There is import of explicit information all the way through. The various forces that are at work contain within themselves a given probabilistic event calculation. That is to say that you actually have an increase in information all the way through the process because, as the specified complexity increases, the probability of the event occurring from a random field of particles decreases. It is in this sense that information is directly necessitated among various events such as crystal formation.

    One of the tricks that is employed by Darwinism is that it gives no mention or credit to the laws themselves. That is to say that everything is just an unguided natural process to Darwinists “except” those things which are governed by natural physical law. The point that you need to take into consideration is that you don’t have to break the second law for random evolution to be discredited. In fact the comprehensible laws of the universe such as the second law are exactly what you would expect to find in a universe that is intelligently designed.

    “A similar analysis may be carried out on the heat pouring in from the Sun to the Earth. The physics of the Earth system uses the low entropy of the sun/outer space temperature gradient to produce organized forms of energy, namely kinetic energy (winds) and potential energy (clouds). So pumping in heat into something can produce order at least at the organizational low end.”

    What you have done here is appeal to the ever-despised “infinite regress argument”, simply moving the energy and information “backwards” in its timeline to get around the claim of entropy. We can go all the way back to a first cause if you like, but within the first cause, either in the laws which govern matter or within the matter itself, are assembly instructions that account for all of the laws that we discover in action throughout the universe. The probabilistic scenarios immediately reject natural materialistic chance (such as dice throwing), and then we are left with the question of what source CSI can be arranged from. Intelligence is the best and only workable inference of which I am aware.

  41. Timothy V Reeves says:

    Frost 122585: I largely agree with what you have said there. As a theist I would certainly want to make mention of physical laws (and contingent complexity) and give credit to them as a medium of Divine providence. In fact we needn’t even go back to the ‘first cause’ to find a mystery: I know of no logical reason why the universe should continue from moment to moment and therefore I take this as evidence of the power of Aseity sustaining a contingent cosmos everywhere and everywhen. “In Him we live and move and have our being….” Hence I’m inclined to agree with your view that comprehensible laws are a sign of providence.
    .
    Let me clarify my aims. I’m trying to establish at what level the information is ‘coming in’, so to speak, in order to create life. Is the creation of life a general dispensation consistent with Divinely designed and sustained processes, or is it a product of several special dispensatory creative acts spread over the long history of Earth? I currently favor the former view, although I acknowledge that the ID notion of IC is a robust challenge to evolutionary theory and requires some serious consideration.
    .
    I am indeed taking into consideration “…that you don’t have to break the second law for random evolution to be discredited”, because in this very comment thread the efficacy of the second law as an evolutionary roadblock is at issue. If it doesn’t roadblock evolution then there is still IC to consider. However if I am reading Kairosfocus correctly, then I understand that he believes he has some work showing the second law to be an evolution stopper, so I had better give his work some time. Have you read Kairosfocus’ work? Do you have an opinion on it?

  42. Frost122585 says:

    “Let me clarify my aims. I’m trying to establish at what level the information is ‘coming in’, so to speak, in order to create life. Is the creation of life a general dispensation consistent with Divinely designed and sustained processes, or is it a product of several special dispensatory creative acts spread over the long history of Earth? I currently favor the former view, although I acknowledge that the ID notion of IC is a robust challenge to evolutionary theory and requires some serious consideration.”

    Yes, let’s indeed talk about the structure of the universe and the nature of physical matter/energy. According to Einstein’s General Relativity, time and space as one are curved in situations dependent upon the nature of local gravitational fields. During the big bang the assembly instructions were built into the first cause, which transcended matter. After the universe began to explode and take shape, matter started to assemble (and it is still happening today) based upon those front-loaded instructions. Now, those front-loaded instructions are not simply existent only in the beginning – they are acting and developing all of the time. This is the nature of time. So the improbable organization of matter that we see in the universe tells us that the laws of nature can be suspended (time and space) as well as the second law – when CSI arises. The matter does not come from anywhere new – it is ever present; it simply moves around like an engineer moves nuts and bolts (invisible hand) to build a motor. The universe is all part of one act – or as Shakespeare said:

    “All the world may indeed be a stage.”

    There is this idea that is implanted into our minds as kids that physical laws or laws of any sort cannot be broken. This is obviously not true if you think about human law (i.e. OJ Simpson). The designer’s laws are only inferred based upon physical empirical analysis. Just because we had not seen a black hole in 1900 didn’t mean that black holes didn’t exist. The laws of physics are merely human-designed rules that are either rarely broken or where we have yet to discover a case where they are broken. Interestingly the second law must have been broken for specified complex life to arise, and, paradoxically, if the second law holds true, then unintelligent evolution could not have taken place. The only way in which unintelligent processes could have produced CSI is if we discover some matrix of evidence that the universe is completely random and large enough to produce the probabilistic resources necessary for life to arise.

    “I am indeed taking into consideration “…that you don’t have to break the second law for random evolution to be discredited”, because in this very comment thread the efficacy of the second law as an evolutionary roadblock is at issue. If it doesn’t roadblock evolution then there is still IC to consider.”

    Let’s make sure that we are on the same plane of thought here. The second law prevents random evolution, not evolution per se. One of the problems people often have in this debate is the concept of the formal interpretation of change over time or common descent called “evolution.” The other is the interpretation of evolution to signify a Godless, design-less universe. IC does in fact present an even greater formal physical structural model for “random” or blind evolution to get around.

    Now the second law shows that it is in fact highly improbable (almost physically impossible) that SC could arise in a physical system without “assembly instructions.” Once again this is a critique of random evolution or “materialistic” evolution, because we are saying that “information” that is complex and purposive is required to assemble complex specified life. I am not great at math and physics, so to understand and interpret the second law exactly you’ll have to ask Kairosfocus.

    The bottom line here is exactly what Wittgenstein said on p. 149 of the Tractatus…

    “The solution of the riddle of life in space and time lies OUTSIDE space and time.”

    Here he means logically, not physically, because he is talking about an existence that is not physical. Perhaps intelligence fits the bill.

  43. kairosfocus says:

    TVR:

    I continue to await your response regarding the stat mech considerations on OOL and related origin of functionally specified complex information [FSCI] on the merits, and note that Frosty has pointed you my way.

    Cheerio

    GEM of TKI

  44. Timothy V Reeves says:

    Now I’m sure you’ll understand Kfoc that if I am to do your work justice – and I’m sure you want me to do your work justice – it’s going to need time commensurate with your valiant efforts. But let me be bluntly frank Kfoc, yours is not the only work I am trying to do justice to. The points you have attempted to make are just part of a more extensive investigation of several sources all of which justly vie for my attention. I crave your patience in accepting that your points must take their rightful place in this investigation and not be at the head driving the investigation alone; their value should be neither inflated nor underestimated. At first look, however, your own work does seem to contain some small problems, but I’ll let you know how that pans out in due course. I realize that you are anxious to get feedback on your work, but please be patient.
    .
    Let me repeat my terms of reference here in question form: Is the second law a strong enough constraint to eliminate conventional evolution? Can we work back from the second law to probabilities? Does the second law lose too much information in its derivation? That is, is the second law too blunt to eliminate conventional evolution? There is some doubt here it seems, because as DLH concedes at 32 above: “Your bluntness objection is worth exploring. At least on trying to explain systems… So the challenge of developing new formulations that clearly distinguish between CSI and physical ‘order’ etc. It’s on my To Do list.” So Kfoc, if you think you have sorted this one out, go and tell DLH and then he can cross an item off his To Do list.

    .
    Just to make sure I’ve understood you correctly FrostNNN here in my own terms is a digest of what I understand to be your salient points:
    .
    1. You do not believe that a general dispensation combining known laws and contingency (viz. random configurations in space and time) is sufficient to explain the creation of life. (What about contingency and laws yet to be discovered?)
    2. You believe that the creation of life involves a series of special dispensations of creation over the history of the universe. (‘Front loading’, around t=0, if it could be proved, would come under 1)
    3. You believe that the second law contradicts a putative concept of evolution that purports to be consistent with known laws and contingency.
    4. You distinguish between an evolution driven by known laws and contingency, and an evolution driven by special dispensations of information.
    5. You accept conventional paleontological history but don’t accept that the engine driving this history is to be found only from known laws and contingency.
    .
    I hope that doesn’t misrepresent your position, but that’s how I understand it at present; don’t hesitate to put me right if I’ve misrepresented anything. At the moment I’m test-driving the general dispensation theory. However, to put this theory through its paces it needs to either pass or fail at point 3. This seems an easier investigation to carry out than putting it through the IC test, because the second law looks to be more analytically amenable than IC. Clearly, if point 3 can be proved I would have to review the general dispensation model.

  45. Timothy V Reeves says:

    Just a reminder: Yes, I am still about. This subject remains a hot topic with me, and so I will be posting with updates from time to time. I trust that this page will remain available for comments? If it doesn’t, not to worry because I’ll post elsewhere. I have stored and book marked this page.
    .
    FrostNNNN’s recommendation of Kfoc’s work would have been more compelling if I felt that what he was recommending was intelligible to him – anyone else care to put in a good word – preferably those with a physics/maths background?
    .
    Once again my terms of reference: The hot issue here isn’t whether evolution is true or not but whether the second law as currently stated contradicts evolution. If the latter is true it could be a short cut to ID. So the stakes are high: Such is my foreground agenda. My background agenda is to answer this question: As regards the creation of life, how does the divine will cut it between general dispensation and special dispensation? As I have already said I currently favor the general dispensation model and consequently I am road testing it.
    .
    BTW: Kfoc seems rather over sensitive about ad hominem attacks. Perusing the above it seems that I have given him no grounds to level this accusation at myself. Let me reassure Kfoc of my goodwill. I understand that if one’s name has been kicked around to such an extent that one must remove it from one’s web site, mild paranoia about such attacks is the price one pays. An imaginative reading of the silences and spaces is then all too easy.
    .
    Cheerio chaps. I’ll be back, as the saying goes.

  46. kairosfocus says:

    TVR

    I have been busy elsewhere at UD and of course in other parts, especially in the real world.

    I will respond on points:

    1] TVR, 44: Is second law strong enough a constraint to eliminate conventional evolution?

    As shown supra and in the always linked, the issue is not whether it is logically possible but whether it is reasonable on the gamut of the observed cosmos for the scenario envisioned by the evolutionary materialists to have occurred. [This is similar to the point that it is strictly possible for all the oxygen molecules in the room in which you sit to rush to one end, leaving you choking.]

    I notice you do not worry about the latter. Its probability is comparable to that of the spontaneous OOL within the gamut of our observed universe in some prebiotic soup or another. And, it is actually probably more probable than that we see the increments in biofunctional information that characterise the origin of major body plans – a jump from millions to hundreds of millions of DNA base pairs, dozens of times over, as just one index.

    Until the evo mat scenarios can adequately answer to these issues, they simply have not made their case, especially when we know already that intelligent agents routinely generate FSCI requiring comparable amounts of storage. AND, we see why there is such a low probability, on grounds basic to why there is a second law of thermodynamics: the non-functional configs are overwhelmingly more probable. [Indeed, had you read TBO’s TMLO, you would see that the energetics to form the monomers and polymers make the equilibria ridiculous for OOL.]

    That is, this is inference to best explanation relative to empirically anchored facts. There is a far more credible, even obvious, empirically anchored explanation available, save for question-begging redefinitions of “science” and associated censorship and frankly unjust career-busting: intelligent action.

    2] Can we work back from the second law to probabilities?

    As shown, we can estimate relevant probabilities well enough to answer, by using the microstate and clusters principle along with the theorems of equipartition and equiprobability of microstates under relevant conditions. [Cf my nanobots-microjets roughly calculated example, which is directly related to the thermodynamics of undoing diffusion, the same diffusion that underlies much of modern solid state electronics.]
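    The microstate-counting estimate appealed to here can be put in miniature. A toy sketch (editorial illustration, not kairosfocus’s own calculation): for N labelled molecules assigned at random to the two halves of a room, the probability of a macrostate is its microstate count over the total, which is why the choking scenario above never happens.

```python
# Equiprobable microstates in miniature: N molecules, each independently
# in the left or right half of a room. The 'all in one end' macrostate
# has exactly 1 microstate out of 2^N; the even split is the
# overwhelmingly dominant (equilibrium) macrostate.
from math import comb

N = 1000                                   # a very small toy gas
p_all_left = 0.5 ** N                      # ~9e-302: effectively zero
p_even_split = comb(N, N // 2) / 2 ** N    # ~0.025: the equilibrium bulge

print(p_all_left < 1e-300 < p_even_split)  # True
```

    The same ratio-of-microstates reasoning, scaled up from 1,000 molecules to the ~10^26 in a real room, is what drives the probabilities to effective zero.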

    3] Does the second law lose too much information in its derivation? That is, is the second law too blunt to eliminate conventional evolution?

    I am not deriving the 2nd law as such, save by way of illustration. I am using its underlying principles to show the problem. Cf the microjets example again, which is accessible to someone with a reasonable high school level education. Kindly, address that. (Nor does your cross-reference to DLH impress me, not when the answer is right in front of you.)

    4] What about contingency and laws yet to be discovered?

    Such “undiscovered laws” amount to a promissory note – after 150 years of trying. On inference to best explanation relative to the laws we do know, there is already a “law” that, with necessity + chance, can easily enough account for OOL etc: intelligent action.

    Besides, future laws will in general be compatible with the current ones; this is necessarily so, as these will have to cover the cases covered by present ones, then extend to the cases that the present ones do not cover. [Think Newtonian dynamics and quantum and relativity here.]

    Thirdly, if “life” is written into the basic laws of the cosmos, that looks a lot like the ultimate form of cosmological front-loading. That simply brings you up to the level of cosmological fine-tuning, for which there is already a strong case for intelligent agency as creator of the observed cosmos. Indeed, you go on to acknowledge just that: ‘Front loading’, around t=0, if it could be proved, would come under 1

    5] the second law contradicts a putative concept of evolution that purports to be consistent with known laws and contingency

    The issue is not logical consistency, but what outcomes rise above effectively zero probability on the gamut of the cosmos.

    To see the force of this, observe how you routinely accept that the posts that appear in this blog are messages tracing to intelligent agents. But in fact, it is logically and physically possible – cf my always linked section A – that such is just lucky noise, with quite similar probabilities to generating functional information in a dna strand of length yielding comparable information storage capacity. The inconsistency in your judgements is what is telling: selective hyperskepticism.

    6] The hot issue here isn’t whether evolution is true or not but whether the second law as currently stated contradicts evolution.

    Again, the issue is not bare logical possibility but sufficient probability to rise above an effective zero likelihood of occurrence on the gamut of the observed cosmos. Just as, in hypothesis testing, one has a chance of wrongly rejecting chance explanations, but confidently infers to agency on sufficiently small probability that chance is the correct explanation. The Dembski-type UPB, odds beyond 1 in 10^150 [which would in all reasonable cases mean configs isolated to better than one in 10^150 – 10^300, the latter taking in islands of functionality of up to 10^150 states easily], gives the lowest odds of incorrectly rejecting chance of any such criterion I have ever seen.
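    The 1-in-10^150 figure quoted here can be reconstructed from Dembski’s published inputs (an editorial sketch for the reader; the three factors are Dembski’s standard estimates, not values derived in this thread):

```python
# Dembski's universal probability bound: the maximum number of
# elementary events in the observed universe, as the product of the
# particle count, Planck-scale state transitions per second, and a
# generous upper bound on available cosmic time.
particles       = 10 ** 80  # elementary particles in the observable universe
transitions_sec = 10 ** 45  # max state transitions per particle per second
seconds         = 10 ** 25  # upper bound on cosmic time, in seconds

upb_events = particles * transitions_sec * seconds
print(upb_events == 10 ** 150)  # True: the bound cited in the comment
```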

    7] Dispensations and divine wills . . .

    Irrelevant to the inference across causal factors (chance, necessity, agency): contingent, so not dominated by necessity; FSCI, so not dominated by chance; thus, agency, on best current, empirically anchored explanation.

    Thereafter, one may ask about agent identification, which normally proceeds on other contextual factors. For instance, OOL and OO body plan level biodiversity do not currently implicate any extra-cosmic agent. However, once we look to cosmogenesis and the organised complexity and contingency of the cosmos as we see it, the necessary being implicated by the existence of such an observed cosmos has in effect two candidates: [1] a quasi-infinite unobserved [perhaps unobservable] array of sub-cosmi with randomly scattered physical parameters, or [2] an agent of sufficient power and intelligence to create the cosmos as we observe it.

    This is now metaphysics – worldview analysis – not science, albeit such analysis also influences science. On other facts and issues over coherence and explanatory power and elegance [Cf Ac 17 etc], I happen to infer that 2 is the better option.

    That is, the God I have known from childhood is credibly not an illusion.

    GEM of TKI

    PS: On ad hominems, kindly recognise that the well-poisoning, dismissive attack to the man is the standard resort of the darwinistas, and studiously avoid personalities.

  47. Timothy V Reeves says:

    Thanks very much Kfocus. This is just to acknowledge your reply and confirm that I am still in circulation. I have got as far as reading your appendices and have been contemplating them. I have a rather busy Easter weekend ahead what with family and an ailing mother, but I’ll be with you ASAP. Have a good Easter!

  48. kairosfocus says:

    Hi Tim

    Thanks for the Easter wishes. Same — belated — to you.

    I’ve been busy elsewhere at UD and offline — including some interesting developments on a sustainable energy project.

    Just let me know when you are ready to comment.

    GEM of TKI

  49. Timothy V Reeves says:

    Hi Kairosfocus
    Here are my impressions as a result of a first pass of your work and its links. I say ‘first pass’ because this is really work in progress on an open ended/unbounded subject. The links are many and so I haven’t been able to follow them all up: e.g. I haven’t given TMLO more than a general perusal, so my ideas will no doubt develop as I do more study. Anyway, as I didn’t want to delay a reply indefinitely, here is how the matter stands with me at the moment. (I hope this posts, as it is fairly long – if not I’ll e-mail it to you)
    .
    On Weltanschauung: Part 1 of 7
    Dispensations and divine wills irrelevant? – not quite, I think. It seemed clear to me that you were in need of a bit of assistance in reading my behavior, because looking at the above you seemed to be getting the wrong end of the stick (although for perfectly understandable reasons). So I thought it might be helpful for you to know where I am coming from and something of my background agenda. In any case although these background notions are not part of physical science, especially a snap out Popperian caricature of science, these background ideas constitute a theoretical attempt at making sense of perception. They are therefore part of a more general empiricism (like history), which has a strong interpretative component, and as such is more loosely tied to elementary observational protocols than the science of simple objects. Therefore, these ‘metaphysical theories’, so-called, exhibit a much greater compliance of structure and a greater capacity to absorb apparent contra evidence; this doesn’t make them unempirical – it’s just that their complex ontology makes them less amenable to checking with elementary experimental protocols.
    .
    I don’t believe there is a clear-cut distinction between science and metaphysics – one imperceptibly blends into the other. Not only that, our observations and conclusions about personalities do have a bearing in the formation of our world view; one just can’t keep out the personal component and what one thinks one knows about a person – their status, their reputation, their personal traits, their allegiances, not to mention the human propensity to identify with social groups and personalities – all feed into the evidence in a more general process than institutionalized science often pretends to allow. The latter attempts to garner observational protocols in carefully controlled circumstances, but for most of the time social texts have to stand in for direct experimental protocols.
    .
    Accordingly, let me expand a bit further on my background. After conversion to Christianity as a young adult I was a YEC for a while as I had the misfortune to be linked to a Christian culture that bound up these beliefs with faith. However, when I was mature enough to review the situation with YEC it didn’t survive the review process. Moreover, as the distinction between general and special dispensation clarified in my mind, the theory of evolution as a general dispensation model gained at least a favorable review status with me and that is where I am at now. My intellectual interests aren’t anywhere near as vested or polarized as you may think. I regard IC and ID as worthwhile input to the review process and have plenty of time for ID theorists.
    .
    Specified Information: Part 2 of 7
    When it comes to ID I am still a learner and so bear with me as I attempt to come to grips with some of its concepts. As the notion of ‘complex specified information’ is foundational in your work let me start with my problem (or my misunderstanding?) with this concept. When I looked at the definition of specified information given by Wikipedia and the researchintelligentdesign web site I found that definition contrary to my expectations. I had guessed that specified information was going to be a quantity that somehow would encapsulate the notion of the organized complexity we find in organisms. Clearly the disorder value W is not adequate as an index in this connection; it reaches a minimum and a maximum only at the ends of the disorder spectrum – from the minimum disorder of bland periodic sequences to the maximum disorder of the random sequence. As far as W is concerned the organized complexity of organisms is an unremarkable state somewhere in between the maximums of order and disorder, and it is unrecognized by a mathematical turning point in W; W just keeps increasing once simple order is left behind. As with the definition of ‘mutual information’ I was expecting complex specified information to register a maximum turning point somewhere between minimum and maximum disorder. But no – not if my understanding is correct. Dembski’s definition of the specified information of a pattern means that (keeping replication and probability resources constant) specified complexity increases as the size of the class of patterns with less than or equal Kolmogorov complexity decreases. As the order of a pattern increases the class of strings with equivalent Kolmogorov complexity gets smaller and smaller and so specified information gets larger and larger – in other words Dembski’s definition seems simply to be the inverse of W. 
Another thing that frustrates Dembski’s definition for me is that Kolmogorov complexity reaches a maximum for random sequences (because they are incompressible) and therefore it seems an inappropriate quantity to use if one wants to nail down the ‘mid range’ complexity of organic structures – Kolmogorov complexity roughly follows W. Perhaps there is something I’ve misunderstood here.
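    TVR’s point that Kolmogorov complexity roughly tracks the disorder W can be seen with a computable stand-in: the length of a string’s compressed form. An editorial sketch using zlib (compressed length only approximates Kolmogorov complexity, which is itself uncomputable):

```python
# A periodic string compresses to almost nothing (low Kolmogorov
# complexity); an incompressible random string does not (maximal
# complexity). Neither measure peaks at 'mid-range' organised
# complexity, which is the frustration voiced above.
import random
import zlib

random.seed(0)
n = 10_000
periodic = b"AB" * (n // 2)                              # bland order
noise = bytes(random.randrange(256) for _ in range(n))   # disorder

c_periodic = len(zlib.compress(periodic, 9))
c_noise = len(zlib.compress(noise, 9))

print(c_periodic < 100)   # True: collapses to a few dozen bytes
print(c_noise > 9_000)    # True: barely shrinks at all
```

    An organism-like string would land somewhere between these two extremes, and nothing in the compressed length singles that region out – which is the complaint about the definition.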
    .
    However, let me leave that issue on one side and at least acknowledge that structures like organisms are far removed from the bland extremes of order and disorder: organic structures appear to be an anomaly in a cosmos otherwise filled with the extremes of simple order or disorder. I agree with your observation that the engines of life which create useful work from temperature gradients, whilst not violating the second law whilst they exist, nevertheless raise the question of how these hyper-complex engines came into existence in the first place; and if they came into existence from non-existence the question is raised about how this change squares with the second law. This is, of course, the big issue here.
    .
    The Thought Experiment: Part 3 of 7
    In your thought experiment you consider the spontaneous creation of a nano-jumbo jet (that’s an oxymoron for you) in the context of diffusion. Generalizing the model by replacing the nanojet with the class of all complex functional artifacts we arrive at a similar point. This class is of unknown size and impossible to enumerate due to the indeterminate nature of just what constitutes a ‘complex functional artifact’. It is a very large class, but there is one thing we can be fairly sure of: in comparison with W(random), W(complex functional artifacts) is likely to be a lot, lot smaller. Hence, using your diffusion model we conclude that the probability of any complex functional artifact arising is negligible. However, when we turn to the class of highly ordered configurations such as ‘crystalline’ periodicities (or even Penrose’s aperiodic crystals) W(periodic) will be a lot smaller even than W(complex functional artifacts). Thus, using your random diffusing nanobot model, we come to the conclusion that periodicity is a much harder state to realize than a complex artifact! From the perspective of simple diffusion periodic structures look to be a much greater feat of organization than organisms! So if this is the second law worked from first principles then it seems that these first principles place a greater stricture of improbability on simple order than they do on complex functional artifacts! Something is missing here. I find this behavior of the nanobot model counterintuitive, and I believe it traces back to an important omission. The nanobot model neglects mathematical constraints on the system that may reduce W to a value much lower than its apparent ‘logically permissible value’. For example, as a result of the gravitational field of a planet, an equilibrium atmosphere is not a uniform distribution of gas, but is distributed according to the Boltzmann distribution.
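    The closing atmosphere example can be made concrete. A sketch of the barometric (Boltzmann) distribution, using textbook constants for nitrogen at 288 K (editor’s illustration; none of the figures come from the comment itself):

```python
# In a gravity field the equilibrium atmosphere is exponentially
# stratified, n(h) = n0 * exp(-m*g*h / (k*T)), not uniform: a physical
# constraint that lowers the effective W of the system, as argued above.
import math

k = 1.380649e-23   # Boltzmann constant, J/K
m = 4.65e-26       # mass of one N2 molecule, kg
g = 9.81           # gravitational acceleration, m/s^2
T = 288.0          # temperature, K

def relative_density(h: float) -> float:
    """Equilibrium number density at height h, relative to sea level."""
    return math.exp(-m * g * h / (k * T))

scale_height = k * T / (m * g)        # height at which density falls to 1/e
print(round(scale_height / 1000, 1))  # ~8.7 km, the familiar scale height
```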

    .
    The nanobot model doesn’t seriously engage the constraint introduced by particle interactions. In the case of real crystallization the effect of this constraint is relatively easy to comprehend; particle interactions set up a kind of configurational ‘ratchet’ whereby the first fragment of an ordered configuration, if stumbled upon by the diffusion process, ‘sticks’; the next installment of the configuration also sticks and so on. The result is a kind of ‘induction’ process: if n is probable then n + 1 is probable and so assuming n = 1 is probable then ‘crystallization’ will take place in a series of ‘inductive’ stages. As you know there is, of course, no violation of the second law required by this local increase in order because when the system reaches equilibrium the local increase in order represented by the crystal is offset by the increase in the overall W afforded by waste heat. The ‘elementary’ normal forces of nature are effectively working like a natural version of Maxwell’s demon: when a diffusing particle finds its place in the periodic nexus, those forces select it.
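    The ratchet can be caricatured in a few lines: random diffusion on a ring, but sticking next to the seed is one-way. This is an editorial toy, not a physical simulation; the one-way sticking stands in for a binding energy exported as waste heat.

```python
# Toy crystallization ratchet: 100 particles random-walk on a ring.
# Any particle that lands adjacent to the crystal sticks permanently
# (the 'ratchet'), so order accumulates inductively despite the walk
# being entirely random. Sizes and step counts are arbitrary choices.
import random

random.seed(1)
SIZE = 200
crystal = {SIZE // 2}                       # the initial seed site
particles = [random.randrange(SIZE) for _ in range(100)]

for _ in range(20_000):                     # diffusion sweeps
    for i, p in enumerate(particles):
        if p is None:
            continue                        # already part of the crystal
        p = (p + random.choice((-1, 1))) % SIZE
        if (p - 1) % SIZE in crystal or (p + 1) % SIZE in crystal:
            crystal.add(p)                  # sticking is one-way
            particles[i] = None
        else:
            particles[i] = p

print(len(crystal))                         # an ordered cluster has grown
```

    Growth is ‘inductive’ in exactly TVR’s sense: each captured particle enlarges the boundary, making the next capture at least as probable.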
    .
    Before going on to look at organic structures let me note that you suggest, without proof, that the nanojet is an isolated island of functionality. This is unclear to me: the class of complex functional artifacts is a class that has been neither well defined nor enumerated and it is therefore difficult to determine just how this class is arranged in configuration space. So whether the nanojet is on an isolated island of functionality and therefore displays the property of irreducible complexity (IC) is difficult to establish. This, of course, contrasts with crystals: here it is relatively easy to comprehend the ‘induction rule’ that ‘bridges’ the gaps allowing the formation of an otherwise highly improbable structure to proceed in stages; crystals are not ‘isolated’ ordered structures, but are found on the ‘inductive highroads’ of simple organization.
    .
    Organic Structures: Part 4 of 7

    The class of organic structures also suffers from the definition and enumeration problems suffered by the class of complex functional artifacts, but there is one known constraint on this class: organic structures must be self-perpetuating/self-maintaining. As with crystal structures, once they have been formed (by whatever means) they tend to persist, although in a much more proactive way than crystals do. Also, it seems fairly intuitively compelling that W(self-perpetuating organized structures), although very much bigger than W(crystals), is still very small compared to the entire space of possibilities, and hence the class of organic structures also seems at first sight to be a prohibitively improbable class.
    .
    Can the inductive Maxwell demon approach be used to form organic structures? According to Wikipedia, all theoretical and experimental investigations into Maxwell’s demon suggest that any demon subject to the general dispensation of our cosmos is unable to violate the second law: if the ‘demon’ is a natural agency it creates heat both in the gathering of the information it needs and in the selection of the products it is looking for. Hence, any natural demon (as opposed to a supernatural agency) creates waste products that compensate for the local reduction in entropy entailed by its ‘sorting’ work. Because human technology is now reaching a point where Maxwell’s imaginary experiment can actually be carried out, let’s extend this a bit further and imagine that human technology has advanced to the point where humans could watch a prebiotic soup and, in the manner of Maxwell’s demon, select and isolate any spontaneous products that moved toward abiogenesis, and then submit these products to the next stage of selection and so on until an elementary organism resulted. Whether or not such a fanciful scenario could actually be achieved is not the point here: the point is that if it could happen, humans, because they are a natural agency, would necessarily generate waste products in carrying out their observations and selections, and this leads to an entropy increase that offsets the decrease in entropy that would be entailed by the creation of an elementary organism. The whole imaginary scenario is, of course, of no more help to the evolutionary case than Dawkins’ ‘METHINKS…’ simulation (an experiment that assumes the answer is already there waiting in the wings to manifest itself), but what it does show is that if we can find a natural Maxwell demon (humans in this case) the second law is not violated even during the construction of fantastic complexity.
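The waste-heat point can be made quantitative via Landauer’s bound, which (per the standard analyses that Wikipedia summarizes) sets the minimum heat any natural demon must dissipate per bit of information it acquires and later erases. A minimal sketch:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_cost(bits, temperature_k=300.0):
    """Minimum heat (in joules) that erasing `bits` of information must
    dissipate at the given temperature: E = N * k_B * T * ln 2."""
    return bits * K_B * temperature_k * math.log(2)

# Erasing one bit at room temperature costs at least ~2.87e-21 J;
# a demon logging megabits of observations pays proportionally more.
```

The number per bit is tiny, but it is strictly positive: any physically realized demon runs an entropy tab that the second law collects.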
    .
    A Natural Maxwell Demon? Part 5 of 7
    So if evolution has occurred where is its natural Maxwell demon? The feature of organisms that does not come out in your work is that, unlike human artifacts, organisms are very proactive in their self-maintenance and perpetuation; if they should come into existence in the right environment (by whatever means) they are self-selecting; organisms are their own Maxwell demon. What selects an organic structure is not some external demon but the nature of the structure itself. The consequence of this is that if (I acknowledge that this is the big controversial ‘if’) inductive paths exist in configuration space for the class of self-selecting structures all the way from relatively elementary forms to highly complex organisms then there is the real possibility that the Maxwell demon effect will come into play. Here the demon effectively exists in the platonic realm of the configuration space implicitly defined by the providential physical regime. The ontology of the ‘natural demon’ is similar to a ‘law of physics’ in that it has an abstract meta-existence that stands above the stuff of the cosmos as a kind of mathematical constraint. Ergo, the natural status of the demon will entail that the formation of organisms as a result of any inductive paths in configuration space being followed will not violate the second law any more than any other natural Maxwell demon does. This logical consistency of the second law with natural Maxwell demons has nothing whatever to do with the remote logical possibility of a highly improbable spontaneous formation; rather, it is to do with a conjectural feature of the organization of configuration space; that is, are the members of the class of self-maintaining structures juxtaposed to form an inductive set?
In deference to the ID community I stress that this is a conjecture: The IC thesis denies the existence of these inductive connections, in which case evolution is prohibited by the kind of analysis you have already given: but make no mistake about it – your analysis only works if IC is assumed.
    .
    Although I favor evolution that’s not to say I don’t have my doubts about it: in particular abiogenesis is very sketchy, with paleontological evidence thin on the ground and speculation rife. At the low end where n = 1, and where the possible structures are far fewer in number, it is difficult even to conceive a kind of evolutionary ‘bootstrap’ structure with the proactive self-maintaining properties required to survive, and so evolution may founder at the first inductive step of n = 1. Also, I am fascinated with the protein folding question and the energetics of monomer and polymer formation that you mention above – something I need to study.
    .
    If you want to find an analytical proof that self-perpetuating structures are so isolated as to make evolution possible only by resort to highly improbable spontaneous leaps, perhaps there is a proof along these lines: Given quantum theory it is quite likely that configuration space has a discrete structure. The set of self-perpetuating structures may be so small relative to the size of this space that it is impossible to juxtapose so few elements into a set connected even by very thin fibrils of induction; it’s as if one is trying to photograph a thin strand at low resolution: there simply aren’t enough pixels per unit area to pick up the strand.
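The ‘pixels per unit area’ intuition can be given a crude numerical form. A minimal sketch, assuming (purely for illustration; real biochemistry is certainly not like this) that viable configurations are scattered at random through an n-bit configuration space, estimating how often a configuration has a viable one-bit-flip neighbour, i.e. an available ‘inductive’ step:

```python
import random

def viable(x, density, seed=0):
    """Toy membership test: configuration x counts as 'viable' with the
    given density, decided by a deterministic per-x pseudo-random draw."""
    return random.Random(x * 1000003 + seed).random() < density

def neighbour_fraction(n_bits, density, n_samples=2000, seed=0):
    """Monte Carlo estimate of the fraction of configurations that have
    at least one viable one-bit-flip neighbour."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x = rng.getrandbits(n_bits)
        if any(viable(x ^ (1 << i), density, seed) for i in range(n_bits)):
            hits += 1
    return hits / n_samples
```

At density 0.1 nearly every point has a viable neighbour (the expected fraction is 1 - 0.9^50); at density 0.001 almost none do. Sufficiently thin sets simply fail to connect, which is the analytical direction such a proof would have to take.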
    .
    Summary of Issues Part 6 of 7
    I am not trying to carry this off by claiming that evolution is in the bag. There is no need to tell me that evolution has its own set of problems; like just how far the slow ‘inductive’ change required by evolution is justified by the fossil record, not to mention the speculative nature of abiogenesis (which tends to raise questions over whether even n=1 is probable). It’s more a case of trying to point out some of the areas in your argument that I believe need more work. Let me list the issues I have with your work:
    .
    1. The definition of specified information is still unclear to me.
    2. Your thought experiment deals with artifacts and not proactive self-maintaining structures.
    3. No distinction is made between the organized complexity of organisms and more banal states like a melting crystal that may have a similar value of W.
    4. Your thought experiment suggests that simple order is more difficult to achieve than ordered complexity.
    5. In assuming that functional structures are isolated you assume without proof that the class of functional structures has an IC layout in configuration space. This could be either a strength or a weakness depending on the truth of IC. But since the class of functional artifacts is difficult to define, the truth of IC is correspondingly difficult to establish, and ditto for the class of self-maintaining organic structures.
    6. Your work doesn’t address the possible existence of a natural Maxwell demon.
    7. Neither the second law nor its underlying principles are violated by natural Maxwell demons.
    .
    Epilogue: Part 7 of 7
    In my early Christian days as a somewhat unwilling believer in YEC I was as motivated as you to show that evolution violated the second law. So I got out my pencil and paper and did some analytical work with random walks. These were the days before personal computing, so analysis was the only option. A feature that I added to my models was that of putting a bias on the random walk in order to get an idea of the effects of particle interactions. The bias effectively skews the peak of the random walk step probability in one direction or the other, and this is one way of modeling a ‘ratchet’ effect; in particular a varying bias has the effect of creating clumps of particles. Using this model I discovered a property that at the time looked a bit like the second law: if the step probability of the biased random walk had a sufficiently long asymptotic tail, the clumping was always only temporary: all clumps eventually dissolved and ‘heat death’ ensued. The appearance and disappearance of these clumps contained just a small hint of evolution, a hint that was a little alarming for a naive YEC. This was about as far as I got, and in any case I eventually lost my somewhat affected conviction in YEC. Instead at a later date (about fifteen years later in fact) I resurrected the work, and by inserting some complex numbers here and there in the diffusion theory I stumbled across my amateur flight of fancy: an excursion into Quantum Gravity. This resulted in a bit of vanity publishing: a book called ‘Gravity and Quantum Non-Linearity’ which can be viewed on Google Books. As for the story of how I found my way into Quantum Gravity, that can be found here. However, with all that behind me I am now back looking at evolution and creation. I understand that the ID community has put a lot of work and emotional investment into their thesis and therefore I wouldn’t presume to wade in and tell them that they have got it all wrong.
Instead I try to adopt the same approach to my ideas that I took with my book – treating them as perhaps ultimately flawed; but as I like playing around with theoretical concepts and equations, I make sure that I enjoy the journey to the full even if the destination isn’t all that I had hoped it would be.
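For what it’s worth, the kind of biased-random-walk model described above is easy to replay today in a few lines (a loose reconstruction; the step distributions and parameters here are my own illustrative choices, not those of the original pencil-and-paper work):

```python
import random

def spread_after(steps, tail_prob, n_particles=500, pull=0.2, seed=3):
    """Biased random walk toward the origin (the 'clumping' bias), with a
    long-tailed Pareto step mixed in with probability tail_prob. Returns
    the mean distance of the particles from the clump centre."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    for _ in range(steps):
        for i in range(n_particles):
            if rng.random() < tail_prob:
                # long asymptotic tail: occasional very large jump
                step = rng.paretovariate(1.1) * rng.choice((-1.0, 1.0))
            else:
                # unit step plus a bias pulling back toward the origin
                drift = -pull if xs[i] > 0 else pull
                step = rng.choice((-1.0, 1.0)) + drift
            xs[i] += step
    return sum(abs(x) for x in xs) / n_particles
```

With no tail the clump holds together; give the step distribution a long enough tail and the particles leak away, the ‘clumps always eventually dissolve’ behaviour described in the comment.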

  50.
    DLH says:

    Timothy V. Reeves at 47

    As with the definition of ‘mutual information’ I was expecting complex specified information to register a maximum turning point somewhere between minimum and maximum disorder. But no – not if my understanding is correct. Dembski’s definition of the specified information of a pattern means that (keeping replication and probability resources constant) specified complexity increases as the size of the class of patterns with less than or equal Kolmogorov complexity decreases.

    Briefly, a system can have maximum information content or maximum randomness, or some combination – at the opposite extreme from simple “order”.
    Kolmogorov complexity cannot distinguish between them.
    Maximum CSI is maximum information, not the W midway between maximum randomness and order. This is a major limitation/misunderstanding of conventional descriptions that needs to be clarified and written up better.
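The point that a Kolmogorov-style measure cannot separate information-rich strings from noise can be illustrated with a compressor as a crude, computable stand-in (Kolmogorov complexity itself is uncomputable, and zlib is only a rough proxy):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """zlib-compressed length: a rough upper-bound proxy for the
    Kolmogorov complexity of `data`."""
    return len(zlib.compress(data, 9))

rng = random.Random(0)
ordered = b"THE END " * 512                             # periodic 'crystal'
noise = bytes(rng.randrange(256) for _ in range(4096))  # maximal randomness
# The periodic string compresses to a few dozen bytes; the random string
# barely compresses at all. But a meaningful 4096-byte message would also
# score high, so this measure alone cannot single out functional content.
```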

  51.
    kairosfocus says:

    TVR:

    I see your response. I will excerpt and remark on a few points:

    1] Worldview issues

    I agree that these are fundamental and often neglected. And, there is good reason why much of science used to be called natural philosophy.

    2] CSI vs FSCI:

    Actually, you have missed a key point — I do NOT talk much about CSI but instead something that is far more directly relevant: functionally specified, complex information, FSCI.

    This is close to Trevors and Abel’s use of Functional Sequence Complexity, which they aptly discuss here; though my identification of FSCI — note, not CSI — as a relevant concept was prior to my learning of their work; it was initially just an abbreviation of something I noticed. [I need to update my link to get this reachable page. I may slice out an excerpt or two from this paper for my notes, maybe even a diagram — I like the 3-D diagram, if it proves helpful] It is also conceptually tied to (though my thinking process antedates my exposure to it) Marks and Dembski’s recent use of “active information,” which gives an increment over the capacity of random search strategies.

    I do so because focussing on the relevant kind of complex specified information — and note, from Appendix 3, that the CSI concept is a development of OOL studies of the 1970s–80s; it is not an ID concept as such — gets around a lot of unnecessary disputes and debates.

    Observe highly complex information [500 – 1000 bits or more worth of contingency] ACTUALLY functioning as information, especially in a control context [with particular reference to digital information working with algorithms], and then let’s discuss what that means, as I do in Section A of the always linked.

    Once we do that, we see that there is a serious issue in the observed fact that such FSCI is ALWAYS, in cases where we can directly see the causal chain, the product of agency. So we have good empirical grounds for inferring that it is a reliable sign of such agency. In addition, as my always-linked Appendix 1, especially the microjets case at point 6, brings out, there are good reasons related to the underpinnings of statistical thermodynamics for that observation.

    Then, address the fact that the cell, which we have good reason to believe is the foundation of biological life, is based in large part on 4-state digital strings that start at 300,000 – 500,000 elements and go up to 3 bn or more. These digital strings are associated with algorithm-implementing machinery and processes: enzymes, RNA, ribosomes etc.

    Then, look at how the forced inference to chance + necessity as the “must-be” “scientific” explanation is based on a circular argument that EXCLUDES agency from the outset.

    Why do you think that is . . . ?

    3] Microjets:

    I very deliberately chose a simple example of a “known” cluster of workable configurations, and showed the pattern in which the number of microstates corresponding to dispersed, clumped and configured macrostates falls in succession.

    That is, to move from components dispersed in a medium through the usual random forces to a clumped, then configured, functional entity requires a reduction of entropy. I also showed that the direction of the probabilities at work is away from that entropy reduction, absent intelligent intervention.

    This served to show that TBO were quite in order to separate dS(clumping) and dS(configuring), to use my terms. Thence we can define appropriate information metrics, per Brillouin, and examine the related thermodynamics.

    To extend to origin of life, we can take the simple point that the known DNA string is of a magnitude that, at the lower end [300k base pairs], gives us ~ 10^180,000 configs — before we get to other entities in the cell required for DNA to work as a part of a physically implemented, algorithm-intensive set of processes.

    Let us for the sake of discussion assert 10^1,500 clusters — islands — of biofunctional configs, each with another 10^150 possible states. [You will of course see that I am using the plausible number of quantum states for our cosmos across its usual gamut of time and number of particles.]

    Examine them against the 10^180,000 configs for 300k DNA elements. The over-generous estimates for the number of life configs — which estimates are vastly more than the number of living cells that are possible in our cosmos across its lifespan — would be so lost in the config space that we have no good reason to infer that a random walk based process would ever get to the relevant configs without exhausting available probabilistic resources. And this rests on statistical thermodynamics principles similar to the reason why you do not fear that the oxygen molecules in the room in which you sit will all rush to one end, asphyxiating you.

    And, there is STILL a lot of room for far more generous estimates without affecting the material result.
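The arithmetic behind these figures is elementary and easy to check (a quick sketch using the 4-states-per-element assumption above):

```python
import math

def log10_configs(n_elements, states_per_element=4):
    """Base-10 exponent of the number of distinct sequences of
    n_elements, each taking one of states_per_element values."""
    return n_elements * math.log10(states_per_element)

# 300,000 4-state elements: ~10^180,618 sequences, i.e. the
# '~10^180,000 configs' order of magnitude cited above. Even a
# generous 10^1,500 islands of 10^150 states each (10^1,650 states
# in total) is a fraction of roughly 10^-178,968 of that space.
```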

    4] order, disorder and organised complexity:

    We need to distinguish three related but distinct concepts. Following TBO in TMLO:

    Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.6 [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189.]

    . . . .

    1. [Class 1:] An ordered (periodic) and therefore specified arrangement:

    THE END THE END THE END THE END

    Example: Nylon, or a crystal . . . .

    2. [Class 2:] A complex (aperiodic) unspecified arrangement:

    AGDCBFE GBCAFED ACEDFBG

    Example: Random polymers (polypeptides).

    3. [Class 3:] A complex (aperiodic) specified arrangement:

    THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE!

    Example: DNA, protein.

    . . . .

    Yockey7 and Wickens5 develop the same distinction, that “order” is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, “organization” refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future.

    Note, the concept is not a matter of a dubious injection by Design theorists, it naturally emerged from reflection on the structure of life systems: highly contingent, but highly aperiodic and quite specified as to functionality.

    5] From the perspective of simple diffusion periodic structures look to be a much greater feat of organization than organisms!

    Not at all: crystal structures are programmed into natural regularities relating to the structure of the relevant molecules. Consider the structure of ice crystals in light of the polar H2O molecule.

    Such forces lead to natural regularities and associated periodicity: order with low information storage, not functional complexity with high — and necessarily aperiodic — code-based information storage. And this is, of course, as opposed to the highly aperiodic strings that are random.

    6 ] you suggest, without proof, that the nanojet is an isolated island of functionality. This is unclear to me: the class of complex functional artifacts is a class that has been neither well defined nor enumerated and it is therefore difficult to determine just how this class is arranged in configuration space.

    First, “proof” is a warning word: in science we do not deal with “proofs” but with observational data and inferences to best empirically anchored, provisional explanation.

    Second, simply observe just how little it takes to perturb a complex artifact into non-functionality. The empirical data are massive on this. [And that is way before we get to the class of designs that are capable of self-replication — conceived in the 1940s, but not yet implemented.]

    Nor am I appealing to irreducible complexity (though in fact it is a lot easier to see IC in action than the critics, through selective hyperskepticism, are willing to acknowledge, given the implications for their favoured paradigm), only to vulnerability to perturbation relative to configuration; the basis for a whole lot of maintenance praxis and debugging or troubleshooting.

    I gather for instance that there have been cases of planes that have crashed because a single fastener was put in the wrong way around in manufacture. [Reflect on just how much inspection is put into aircraft manufacture, for this very reason.]

    7] Because human technology is now reaching a point where Maxwell’s imaginary experiment can actually be carried out, let’s extend this a bit further and imagine that human technology has advanced to the point where humans could watch a prebiotic soup and in the manner of Maxwell’s demon select and isolate any spontaneous products that moved toward abiogenesis and then submit these products to the next stage of selection and so on until an elementary organism resulted . . .

    This would of course be precisely a case of intelligent design through injection of active information enabling target-based selection. Chem evo scenarios are precisely supposed to work without such intelligent intervention, at least per the evo mat paradigm.

    8] The feature of organisms that does not come out in your work is that, unlike human artifacts, organisms are very proactive in their self-maintenance and perpetuation; if they should come into existence in the right environment (by whatever means) they are self-selecting; organisms are their own Maxwell demon

    BEEP . . . !

    The problem is that the observed self-replicating nanotechnology of the cell is precisely based on a very high degree of complexity that is at once well beyond the UPB and a case of functionality acting to create other functionality in accord with code-based algorithms.

    In short, this begs the question at stake: ORIGIN of the FSCI.

    9] What selects an organic structure is not some external demon but the nature of the structure itself. The consequence of this is that if (I acknowledge that this is the big controversial ‘if’) inductive paths exist in configuration space for the class of self-selecting structures all the way from relatively elementary forms to highly complex organisms then there is the real possibility that the Maxwell demon effect will come into play.

    You here acknowledge that you are begging the question, without empirical basis in observations of such an emergence.

    That’s fine for speculation, but you must then accept that there is an easily available alternative: intelligence — which is KNOWN EMPIRICALLY to generate FSCI — is responsible for the relevant FSCI.

    On the basis of what has empirical support versus what has not, we know the better explanation relative to factual adequacy.

    So, kindly bring forward empirical data — not speculations, models and simulations [pencil and paper or computers makes but little difference] — or else your model founders on the first prong of comparative difficulties: factual adequacy.

    And BTW, if the underlying physics of the cosmos is so structured that there are platonic-style forms embedded in it that “naturally” unfold prebiotic soups [for the challenges of which see TMLO’s earlier chapters] into life, that speaks right to the third ID issue, the source of the organised complexity of the cosmos as a whole. [Cf my always linked Section D.]

    GEM of TKI

  52.
    Timothy V Reeves says:

    Thanks very much for the replies DLH and Kfocus. Our core positions are probably not so different, but working that out may lead to some divergences.

    DLH: I would be interested in any further clarifying links on CSI

    Kfocus: I’ll do my best to engage directly those points you have raised above. (I read appendices 1 & 2.) I’ll reply asap.

  53.
    kairosfocus says:

    TVR

    1] Here is my discussion on the FSCI concept, which is much more directly graspable.

    2] Here is my discussion on the roots of the CSI concept and what it means — it will be plain that CSI is a more general view, and FSCI is perhaps the more focussed, relevant version.

    3] ID Research wiki has a useful discussion here:

    The term Specified Complexity comes from Leslie Orgel, who employed it to describe the difference between living and non-living systems.[1]

    Specified Complexity as developed by William Dembski is a dual-pronged criterion for objectively detecting the effects of certain types of intelligent activity without first hand evidence of the cause of the event in question.[2] It consists of two important components, both of which are essential for inferring design reliably. The first component is the criterion of complexity or improbability. The second is the criterion of specificity, which is an independently given, detachable pattern. For more discussion, see Defining Specified Complexity

    In the just linked, the quantitative metric of CSI is given by:

    The definition of context-dependent specified complexity of a pattern T given a (chance) hypothesis H is given in section 7, “Specified Complexity”, p. 21 as:

    χ = –log2[M·N·φS(T)·P(T|H)].

    In context-independent specified complexity, the product M·N is replaced by 10^120.

    H is the chance-based null hypothesis and T is the observed event, so that the probability metric is the conditional probability of T given H. M·N gives a metric of probabilistic resources: M observers making N observations each, so M·N is the number of searches in effect.

    This brings up:

    φS(T), the specificational resources associated by S with T. The subscript S denotes a semiotic agent, which is simply anyone/anything that can communicate using some symbolic language. An event such as our T must conform to some pattern P for S to be able to communicate its occurrence, and such a pattern can be described using a string of symbols . . . The descriptive complexity or semiotic cost φ’S(P) of a pattern P is the number of symbols used in the shortest description of P available to S. Conceptually, we can think of it as that S has a dictionary of descriptions relevant to the subject area beginning with descriptions of length one, continuing with descriptions of length two, and so on, and S goes through this dictionary until a matching description of P is found. Assuming S has found a description for P, yet continues to go through the dictionary to the last entry of the same length, the number of descriptions checked is the number of all descriptions with a length shorter or equal to the length of the shortest description of P . . . . φS(T) = the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T.

    Drawing up the bottomline:

    What is the point in the specificational resources? Dembski’s claim is that a simple pattern, that is a pattern with a short description, is a stronger indicator for design than is a complex pattern. The ‘complexity’ in ‘specified complexity’ refers primarily to low probability of an event to occur by chance (what Dembski calls ‘statistically complex’). A pattern such as Poker Hand is as simple as Royal Flush, but, of course, any poker hand is a Poker Hand, so simplicity of the pattern is not sufficient to say that we have a case of design. A pattern such as Deuce and Five of Hearts, Nine of Spades, King of Diamonds, and Six of Spades has a very low probability to occur; but it’s not really a pattern we are concerned about, if by ‘design’ we mean ‘cheating’, although someone might claim that it’s not every day you see exactly this poker hand. It’s the combination of a simple pattern and a low probability that should arouse our suspicion, according to Dembski.

    Why the subscript S? Because different observers may not have the same descriptions at disposition; for instance, a person unfamiliar with poker might not know, what a “Royal Flush” is, and not know that it has special significance within the game. Therefore, specified complexity is a subjective measure.

    If we look at the product φS(T)·P(T | H), then it is an upper bound on the probability of S observing an event that is at most as descriptively complex as T and has at most the same probability (cf. p. 18).

    In short, the whole product M·N·φS(T)·P(T | H) is an upper bound on the probability, subject to H, that at least one of M independent observers during one of N observations will report to the semiotic agent S at least one event that is at most as descriptively complex as T and has at most the same probability.

    Converting to the binary logarithm reverses the scale and turns the product into a number of bits. If M·N·φS(T)·P(T | H) < 1/2, then χ > 1. That is, if χ > 1, it can be considered more reasonable to conclude design than to conclude chance.
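To make the metric concrete, here is a toy numeric reading of the context-independent form just described (the probability and specificational-resource inputs are made-up illustrative values, not measured quantities):

```python
import math

def chi(p_event, spec_resources, repl_resources=10.0**120):
    """Context-independent specified complexity:
    chi = -log2(repl_resources * phi_S(T) * P(T|H)).
    The logs are summed term by term, so very small p_event values
    cannot underflow an explicit product."""
    return -(math.log2(repl_resources)
             + math.log2(spec_resources)
             + math.log2(p_event))

# A royal flush (p = 4/2,598,960) with toy specificational resources of
# 1e4 scores hugely negative once 10^120 trials are allowed, so no
# design is inferred at poker scale:
# chi(4 / 2598960, 1e4) ≈ -392.6
# A 500-bit specified event with toy resources of 1e6 clears the bound:
# chi(2.0**-500, 1e6) ≈ 81.4
```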

    Dembski’s conceptual and mathematical definition makes sense to me, but I think that FSCI is more directly relevant and accessible.

    Cf my Section A, the always linked.

    GEM of TKI

  54.
    kairosfocus says:

    PS: Saw a “database error” message and see that the Greek letter chi (χ) did not make it through the posting process cleanly in the formulas above.

  55.
    Timothy V Reeves says:

    Thanks very much for taking the trouble to compile all that Kfocus. I’m studying it and will reply.

  56.
    kairosfocus says:

    TVR:

    Welcome.

    I have also updated Appendix 3 of the always linked to take in the TA chart on RSC, OSC, FSC.

    GEM of TKI

  57.
    Timothy V Reeves says:

    Hi Kfocus,

    Just to say that I haven’t deserted you and still have your material in my sights, although a bit on the back burner at the moment. May be a little while before I get back to you.

  58.
    kairosfocus says:

    TVR:

    Okay, trust things work out for you — I am about to put myself in a public hot-seat here for the next few weeks per a consultancy. [I guess that’s what they pay you for . . .]

    Just let me know when you respond. [You can actually contact me by email fairly easily through the contact-me in my always linked.]

    GEM of TKI

  59.
    Timothy V Reeves says:

    Hi Kfocus
    Just a quick update on progress. I have read most of your Web page on “Information and Design etc”. I’ve read Trevors and Abel’s paper on functional sequence complexity and Marks and Dembski’s paper on active information. I am currently going through the three available chapters of “The Mystery of Life’s Origin.” So I am still around, but will be a little while yet.

  60.
    kairosfocus says:

    Hi Tim:

    Good to hear from you on you readings. Do let me know when you respond onward. (Just remember I may miss the posting.)

    GEM of TKI
