Uncommon Descent Serving The Intelligent Design Community

“No process can result in a net gain of information” underlies 2LoT


Further to Granville Sewell's work on the 2nd Law in an open system, here is Duncan & Semura's profound insight into how loss of information is the foundation of the 2nd Law of Thermodynamics. This appears foundational to understanding, developing, and testing theories of origins and of subsequent change in physical and biotic systems. ———————

The key insight here is that when one attempts to derive the second law without any reference to information, a step which can be described as information loss always makes its way into the derivation by some sleight of hand. An information-losing approximation is necessary, and adds essentially new physics into the model which is outside the realm of energy dynamics.


4. Summary of the Perspective

1) Energy and information dynamics are independent but coupled (see Figure 1).
2) The second law of thermodynamics is not reducible purely to mechanics (classical or quantum); it is part of information dynamics. That is, the second law exists because there is a restriction applying to information that is outside of and additional to the laws of classical or quantum mechanics.
3) The foundational principle underlying the second law can then be expressed succinctly in terms of information loss:
“No process can result in a net gain of information.”
In other words, the uncertainty about the detailed state of a system cannot decrease over time – uncertainty increases or stays the same.
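The "uncertainty" in this formulation is naturally read as Shannon entropy. As a minimal illustration (my own sketch, not from Duncan & Semura's paper), a doubly stochastic mixing map, i.e. dynamics that lose information about the detailed state without favouring any particular state, can never decrease that entropy:

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits: H(p) = -sum p_i * log2(p_i)."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# A doubly stochastic "information-losing" map: it mixes states
# without favouring any of them (rows and columns each sum to 1).
M = [[0.8, 0.2],
     [0.2, 0.8]]

def apply_map(M, p):
    """One step of the mixing dynamics: p' = M p."""
    return [sum(M[i][j] * p[j] for j in range(len(p))) for i in range(len(M))]

p = [0.9, 0.1]       # fairly certain about the system's state
q = apply_map(M, p)  # after one step of mixing

print(round(shannon_entropy(p), 3))  # lower uncertainty before mixing
print(round(shannon_entropy(q), 3))  # uncertainty has increased
```

For a doubly stochastic map this holds for every starting distribution: uncertainty about the detailed state increases or stays the same, exactly as the principle states.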

The information loss perspective provides a natural framework for incorporating extensions and apparent challenges to the second law. The principle that “no process can result in a net gain of information” appears to be deeper and more universal than standard formulations of the second law.

. . . the information-loss framework offers the possibility of discovering new mechanisms of information storage through the analysis of second law challenges, deepening our understanding both of the second law and of information dynamics.

See full paper: Information Loss as a Foundational Principle for the Second Law of Thermodynamics, T. L. Duncan, J. S. Semura
Foundations of Physics, Volume 37, Issue 12, pp. 1767-1773, DOI 10.1007/s10701-007-9159-z
This builds on Duncan & Semura’s first paper:
The Deep Physics Behind the Second Law: Information and Energy As Independent Forms of Bookkeeping, T. Duncan, J. Semura, Entropy 2004, 6, 21-29, arXiv:cond-mat/0501014v1

Comments
Hi Tim: Good to hear from you on your readings. Do let me know when you respond onward. (Just remember I may miss the posting.) GEM of TKI
kairosfocus
May 2, 2008, 04:45 AM PST
Hi Kfocus, Just a quick update on progress. I have read most of your Web page on "Information and Design etc." I've read Trevors and Abel's paper on functional sequence complexity and Marks and Dembski's paper on active information. I am currently going through the three available chapters of "The Mystery of Life's Origin." So I am still around, but it will be a little while yet.
Timothy V Reeves
May 2, 2008, 04:12 AM PST
TVR: Okay, trust things work out for you -- I am about to put myself in a public hot-seat here for the next few weeks per a consultancy. [I guess that's what they pay you for . . .] Just let me know when you respond. [You can actually contact me by email fairly easily through the contact-me in my always linked.] GEM of TKI
kairosfocus
April 12, 2008, 04:46 AM PST
Hi Kfocus, Just to say that I haven't deserted you and still have your material in my sights, although it is a bit on the back burner at the moment. It may be a little while before I get back to you.
Timothy V Reeves
April 12, 2008, 04:17 AM PST
TVR: Welcome. I have also updated appendix 3 of the always linked to take in the TA chart on RSC, OSC, FSC. GEM of TKI
kairosfocus
April 4, 2008, 06:09 AM PST
Thanks very much for taking the trouble to compile all that, Kfocus. I'm studying it and will reply.
Timothy V Reeves
April 4, 2008, 02:38 AM PST
PS: Saw a "database error" message and see that Chi (χ) has not made it through the posting process; it is the ? above.
kairosfocus
April 3, 2008, 02:57 AM PST
TVR 1] Here is my discussion on the FSCI concept, which is much more directly graspable. 2] Here is my discussion on the roots of the CSI concept and what it means -- it will be plain that CSI is a more general view, and FSCI is perhaps the more focussed, relevant version. 3] ID Research wiki has a useful discussion here:
The term Specified Complexity comes from Leslie Orgel, who employed it to describe the difference between living and non-living systems.[1] Specified Complexity as developed by William Dembski is a dual-pronged criterion for objectively detecting the effects of certain types of intelligent activity without first hand evidence of the cause of the event in question.[2] It consists of two important components, both of which are essential for inferring design reliably. The first component is the criterion of complexity or improbability. The second is the criterion of specificity, which is an independently given, detachable pattern. For more discussion, see Defining Specified Complexity
In the just linked, the quantitative metric of CSI is given by:
The definition of context-dependent specified complexity of a pattern T given a (chance) hypothesis H is given in section 7, "Specified Complexity", p. 21 as: χ = –log2[M·N·φS(T)·P(T|H)]. In context-independent specified complexity, the product M·N is replaced by 10^120.
H is the chance-shuffled null hypothesis, T is the observed event, so that the probability metric is the conditional probability of T given H. M·N gives a metric of probabilistic resources: M observers, N observations per observer, so M·N is the number of searches in effect. This brings up:
φS(T), the specificational resources associated by S with T. The subscript S denotes a semiotic agent, which is simply anyone/anything that can communicate using some symbolic language. An event such as our T must conform to some pattern P for S to be able to communicate its occurrence, and such a pattern can be described using a string of symbols . . . The descriptive complexity or semiotic cost of a pattern P is the number of symbols used in the shortest description of P available to S. Conceptually, we can think of it as follows: S has a dictionary of descriptions relevant to the subject area, beginning with descriptions of length one, continuing with descriptions of length two, and so on, and S goes through this dictionary until a matching description of P is found. Assuming S has found a description for P, yet continues to go through the dictionary to the last entry of the same length, the number of descriptions checked is the number of all descriptions with a length shorter than or equal to the length of the shortest description of P . . . . φS(T) = the number of patterns for which S's semiotic description of them is at least as simple as S's semiotic description of T.
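The dictionary-search idea just quoted can be sketched in a few lines of Python. The pattern names and description lengths below are purely illustrative stand-ins, not values taken from Dembski's paper:

```python
# Toy semiotic agent S: for each pattern S can talk about, the length
# (in symbols) of the shortest description available to S.
# All entries are illustrative, not from any real dictionary.
shortest_description = {
    "royal flush": 11,
    "straight flush": 14,
    "four of a kind": 14,
    "full house": 10,
    "flush": 5,
    "straight": 8,
    "pair": 4,
}

def phi_S(target: str) -> int:
    """phi_S(T): how many patterns S can describe at least as simply as T."""
    n = shortest_description[target]
    return sum(1 for length in shortest_description.values() if length <= n)

print(phi_S("pair"))         # only "pair" itself is this easy to describe
print(phi_S("royal flush"))  # several patterns are no harder to describe
```

On this toy dictionary, simple patterns (short descriptions) have small specificational resources, which is what makes them strong candidates for a design inference when they are also improbable.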
Drawing up the bottom line:
What is the point in the specificational resources? Dembski's claim is that a simple pattern, that is, a pattern with a short description, is a stronger indicator for design than is a complex pattern. The 'complexity' in 'specified complexity' refers primarily to the low probability of an event occurring by chance (what Dembski calls 'statistically complex'). A pattern such as Poker Hand is as simple as Royal Flush, but, of course, any poker hand is a Poker Hand, so simplicity of the pattern is not sufficient to say that we have a case of design. A pattern such as Deuce and Five of Hearts, Nine of Spades, King of Diamonds, and Six of Spades has a very low probability of occurring; but it's not really a pattern we are concerned about, if by 'design' we mean 'cheating', although someone might claim that it's not every day you see exactly this poker hand. It's the combination of a simple pattern and a low probability that should arouse our suspicion, according to Dembski. Why the subscript S? Because different observers may not have the same descriptions at their disposal; for instance, a person unfamiliar with poker might not know what a "Royal Flush" is, and not know that it has special significance within the game. Therefore, specified complexity is a subjective measure. If we look at the product φS(T)·P(T | H), then it is an upper bound on the probability of S observing an event that is at most as descriptively complex as T and has at most the same probability (cf. p. 18). In short, the whole product M·N·φS(T)·P(T | H) is an upper bound on the probability, subject to H, that at least one of M independent observers during one of N observations will report to the semiotic agent S at least one event that is at most as descriptively complex as T and has at most the same probability. Converting to the binary logarithm reverses the scale and turns the product into a number of bits. If M·N·φS(T)·P(T | H) < 1/2, then χ > 1. That is, if χ > 1, it can be considered more reasonable to conclude design than to conclude chance.
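As a numerical sanity check on the poker illustration, here is a small sketch of the χ computation. The φS value is an assumed toy figure and the probabilistic resources are chosen purely for illustration:

```python
import math
from math import comb

# Probability of a royal flush in a 5-card deal: 4 hands out of C(52,5).
p_royal = 4 / comb(52, 5)

def chi(M_N, phi, p):
    """Dembski's context-dependent metric: chi = -log2(M*N*phi_S(T)*P(T|H))."""
    return -math.log2(M_N * phi * p)

# Illustrative toy value for phi_S("royal flush"); not computed from a
# real description dictionary.
phi_royal = 5

# One observed hand: surprising enough to suspect cheating.
print(chi(1, phi_royal, p_royal))      # about 17 bits, well above 1

# A million observed hands: a royal flush somewhere is unremarkable.
print(chi(10**6, phi_royal, p_royal))  # negative: no design inference
```

Note how the replicational resources M·N do the work: the same simple, improbable pattern warrants a design (cheating) inference for a single deal but not across a million deals.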
Dembski's conceptual and mathematical definition makes sense to me, but I think that FSCI is more directly relevant and accessible. Cf my Section A, the always linked. GEM of TKI
kairosfocus
April 3, 2008, 02:54 AM PST
Thanks very much for the replies, DLH and Kfocus. Our core positions are probably not so different, but working that out may lead to some divergences. DLH: I would be interested in any further clarifying links on CSI. Kfocus: I'll do my best to engage directly those points you have raised above. (I read appendices 1 & 2.) I'll reply asap.
Timothy V Reeves
April 2, 2008, 03:59 AM PST
TVR: I see your response. I will excerpt and remark on a few points:

1] Worldview issues: I agree that these are fundamental and often neglected. And there is good reason why much of science used to be called natural philosophy.

2] CSI vs FSCI: Actually, you have missed a key point -- I do NOT talk much about CSI but instead about something that is far more directly relevant: functionally specified, complex information, FSCI. This is close to Trevors and Abel's use of Functional Sequence Complexity, which they aptly discuss here; though my identification of FSCI -- note, not CSI -- as a relevant concept was prior to my learning of TA's work; it was initially just an abbreviation of something I noticed. [I need to update my link to get this reachable page. I may slice out an excerpt or two from this paper for my notes, maybe even a diagram -- I like the 3-d diagram, if it proves helpful.] It is also conceptually tied to (though my thinking process antedates my exposure to it) Marks and Dembski's recent use of "active information," which gives an increment over the capacity of random search strategies. Focusing on the relevant kind of complex specified information -- and note from Appendix 3 that the CSI concept is a development of OOL studies of the 1970s-80s; it is not an ID concept as such -- gets around a lot of unnecessary disputes and debates. Observe highly complex information [500 - 1000 bits or more worth of contingency] ACTUALLY functioning as information, especially in a control context [with particular reference to digital information working with algorithms], and then let's discuss what that means, as I do in Section A of the always linked. Once we do that, we see that there is a serious issue: the observed fact that such FSCI is ALWAYS, in cases where we can directly see the causal chain, the product of agency. So we have good empirical grounds for inferring that it is a reliable sign of such agency.
In addition, as my always linked appendix 1, esp. the microjets case at point 6, brings out, there are good reasons related to the underpinnings of statistical thermodynamics for that observation. Then, address the fact that the cell, which we have good reason to believe is the foundation of biological life, is based in large part on 4-state digital strings that start at 300 - 500,000 elements and go up to 3 bn or more. These digital strings are associated with algorithm-implementing machinery and processes: enzymes, RNA, ribosomes, etc. Then, look at how the forced inference to chance + necessity as the "must-be" "scientific" explanation is based on a circular argument that EXCLUDES agency from the outset. Why do you think that is . . . ?

3] Microjets: I very deliberately chose a simple example of a "known" cluster of workable configurations, and showed the pattern in which the number of microstates corresponding to dispersed, clumped and configured states falls in succession. That is, to move from components dispersed in a medium through the usual random forces to a clumped, then configured functional entity requires reduction of entropy. I also showed that the direction of the probabilities at work is away from that entropy reduction, absent intelligent intervention. This served to show that TBO were quite in order to separate dS_clumping and dS_configuring, to use my terms. Thence we can define appropriate information metrics, per Brillouin, and examine the related thermodynamics. To extend to origin of life, we can take the simple point that the known DNA string is of a magnitude that, at the lower end [300k base pairs], gives us ~ 10^180,000 configs -- before we get to other entities in the cell required for DNA to work as a part of a physically implemented, algorithm-intensive set of processes. Let us for the sake of discussion assert 10^1,500 clusters -- islands -- of biofunctional configs, each with another 10^150 possible states.
[You will of course see that I am using the plausible number of quantum states for our cosmos across its usual gamut of time and number of particles.] Examine them against 10^180,000 configs for 300k DNA elements. The over-generous estimates for the number of life configs -- which estimates are vastly more than the number of living cells that are possible in our cosmos across its lifespan -- would be so lost in the config space that we have no good reason to infer that a random-walk-based process would ever get to the relevant configs without exhausting available probabilistic resources. And that is on statistical-thermodynamics grounds similar to the reason why you do not fear that the oxygen molecules in the room in which you sit will all rush to one end, asphyxiating you. And there is STILL a lot of room for far more generous estimates without affecting the material result.

4] Order, disorder and organised complexity: We need to distinguish three related but distinct concepts. Following TBO in TMLO:
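The config-space arithmetic in point 3 is easy to check in a few lines. The resource figures below are simply the ones asserted in the discussion, granted for the sake of argument:

```python
import math

# Number of 4-state strings of length 300,000 (the lower-end genome
# size cited above), expressed as a power of ten.
bases = 300_000
log10_configs = bases * math.log10(4)
print(f"4^{bases} is about 10^{log10_configs:,.0f}")

# Probabilistic resources: ~10^150 quantum states for the observable
# cosmos (the figure used in the discussion above).
log10_resources = 150

# Even granting 10^1,500 islands of 10^150 functional configs each:
log10_functional = 1_500 + 150
shortfall = log10_configs - (log10_functional + log10_resources)
print(f"The search still undersamples the space by ~10^{shortfall:,.0f}")
```

So the "~10^180,000" figure quoted in the comment is, more precisely, about 10^180,618, and subtracting all the granted resources in the exponent leaves a shortfall of well over 10^178,000.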
Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.6 [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189.] . . . . 1. [Class 1:] An ordered (periodic) and therefore specified arrangement: THE END THE END THE END THE END Example: Nylon, or a crystal . . . . 2. [Class 2:] A complex (aperiodic) unspecified arrangement: AGDCBFE GBCAFED ACEDFBG Example: Random polymers (polypeptides). 3. [Class 3:] A complex (aperiodic) specified arrangement: THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE! Example: DNA, protein. . . . . Yockey7 and Wickens5 develop the same distinction, that "order" is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, "organization" refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future.
Note, the concept is not a matter of a dubious injection by Design theorists; it naturally emerged from reflection on the structure of life systems: highly contingent, but highly aperiodic and quite specified as to functionality.

5] From the perspective of simple diffusion periodic structures look to be a much greater feat of organization than organisms! Not at all: crystal structures are programmed into natural regularities relating to the structure of the relevant molecules. Consider the structure of ice crystals in light of the polar H2O molecule. Such forces lead to natural regularities and associated periodicity: order with low information storage, not functional complexity with high -- and necessarily aperiodic -- code-based information storage. And, of course, as opposed to the highly aperiodic strings that are random.

6] You suggest, without proof, that the nanojet is an isolated island of functionality. This is unclear to me: the class of complex functional artifacts is a class that has been neither well defined nor enumerated and it is therefore difficult to determine just how this class is arranged in configuration space. First, "proof" is a warning word: in science we do not deal with "proofs" but with observational data and inferences to the best empirically anchored, provisional explanation. Second, simply observe just how little it takes to perturb a complex artifact into non-functionality. The empirical data are massive on this. [And that is way before we get to the class of designs that are capable of self-replication -- conceived in the 1940's, but not yet implemented.] Nor am I appealing to irreducible complexity (though in fact it is a lot easier to see IC in action than the critics, through selective hyperskepticism, are willing to acknowledge, given the implications for their favoured paradigm), only to vulnerability to perturbation relative to configuration; the basis for a whole lot of maintenance praxis and debugging or troubleshooting.
I gather for instance that there have been cases of planes that have crashed because a single fastener was put in the wrong way around in manufacture. [Reflect on just how much inspection is put into aircraft manufacture, for this very reason.]

7] Because human technology is now reaching a point where Maxwell's imaginary experiment can actually be carried out, let's extend this a bit further and imagine that human technology has advanced to the point where humans could watch a prebiotic soup and in the manner of Maxwell's demon select and isolate any spontaneous products that moved toward abiogenesis and then submit these products to the next stage of selection and so on until an elementary organism resulted . . . This would of course be precisely a case of intelligent design through injection of active information enabling target-based selection. Chem evo scenarios are precisely supposed to work without such intelligent intervention, at least per the evo mat paradigm.

8] The feature of organisms that does not come out in your work is that unlike human artifacts, organisms are very proactive in their self-maintenance and perpetuation; if they should come into existence in the right environment (by whatever means) they are self-selecting; organisms are their own Maxwell demon BEEP . . . ! The problem is that the observed self-replicating nanotechnology of the cell is precisely based on a very high degree of complexity that is at once well beyond the UPB, and it is a case of functionality acting to create other functionality in accord with code-based algorithms. In short, this begs the question at stake: the ORIGIN of the FSCI.

9] What selects an organic structure is not some external demon but the nature of the structure itself.
The consequence of this is that if (I acknowledge that this is the big controversial 'if') inductive paths exist in configuration space for the class of self-selecting structures all the way from relatively elementary forms to highly complex organisms then there is the real possibility that the Maxwell demon effect will come into play. You here acknowledge that you are begging the question, without empirical basis in observations of such an emergence. That's fine for speculation, but you must then accept that there is an easily available alternative: intelligence -- which is KNOWN EMPIRICALLY to generate FSCI -- is responsible for the relevant FSCI. On the basis of which explanation has empirical support and which has not, we know the better explanation relative to factual adequacy. So, kindly bring forward empirical data -- not speculations, models and simulations [pencil and paper or computers make but little difference] -- or else your model founders on the first prong of comparative difficulties: factual adequacy. And BTW, if the underlying physics of the cosmos is so structured that there are platonic-style forms embedded in it that "naturally" unfold prebiotic soups [for the challenges of which see TMLO's earlier chapters] into life, that speaks right to the third ID issue: the source of the organised complexity of the cosmos as a whole. [Cf my always linked Section D.] GEM of TKI
kairosfocus
April 1, 2008, 06:14 AM PST
Timothy V. Reeves at 47
As with the definition of ‘mutual information’ I was expecting complex specified information to register a maximum turning point somewhere between minimum and maximum disorder. But no – not if my understanding is correct. Dembski’s definition of the specified information of a pattern means that (keeping replication and probability resources constant) specified complexity increases as the size of the class of patterns with less than or equal Kolmogorov complexity decreases.
Briefly, a system can have maximum information content or maximum randomness, or some combination -- at the opposite extreme from simple "order". Kolmogorov complexity cannot distinguish between them. Maximum CSI is maximum information, not the W midway between maximum randomness and order. This is a major limitation/misunderstanding of conventional descriptions that needs to be clarified and written up better.
DLH
March 31, 2008, 12:39 PM PST
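DLH's point that Kolmogorov complexity lumps maximal randomness together with maximal information content can be probed with a crude, computable stand-in: compressed length (a standard, if rough, proxy for Kolmogorov complexity, which is itself uncomputable; this sketch is my illustration, not from the thread):

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Compressed length in bytes: a crude, computable stand-in for
    Kolmogorov complexity (which is uncomputable in general)."""
    return len(zlib.compress(s.encode(), 9))

random.seed(0)
n = 4000
ordered = "THE END " * (n // 8)                          # periodic, low complexity
rand = "".join(random.choice("ACGT") for _ in range(n))  # near-incompressible

print(compressed_size(ordered))  # tiny: the repeating pattern compresses away
print(compressed_size(rand))     # large: randomness resists compression
```

The random string scores far higher than the ordered one, which is exactly the worry raised below: a complexity measure that peaks at randomness cannot, by itself, single out the "mid-range" organized complexity of organisms.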
Hi Kairosfocus, Here are my impressions as a result of a first pass of your work and its links. I say 'first pass' because this is really work in progress on an open-ended/unbounded subject. The links are many and so I haven't been able to follow them all up: e.g. I haven't given TMLO more than a general perusal, so my ideas will no doubt develop as I do more study. Anyway, as I didn't want to delay a reply indefinitely, here is how the matter stands with me at the moment. (I hope this posts, as it is fairly long -- if not I'll e-mail it to you.)

On Weltanschauung: Part 1 of 7

Dispensations and divine wills irrelevant? -- not quite, I think. It seemed clear to me that you were in need of a bit of assistance in reading my behavior, because looking at the above you seemed to be getting the wrong end of the stick (although for perfectly understandable reasons). So I thought it might be helpful for you to know where I am coming from and something of my background agenda. In any case, although these background notions are not part of physical science, especially a snap out Popperian caricature of science, these background ideas constitute a theoretical attempt at making sense of perception. They are therefore part of a more general empiricism (like history), which has a strong interpretative component, and as such is more loosely tied to elementary observational protocols than the science of simple objects. Therefore, these 'metaphysical theories', so-called, exhibit a much greater compliance of structure and a greater capacity to absorb apparent contrary evidence; this doesn't make them unempirical -- it's just that their complex ontology makes them less amenable to checking with elementary experimental protocols. I don't believe there is a clear-cut distinction between science and metaphysics -- one imperceptibly blends into the other.
Not only that, our observations and conclusions about personalities do have a bearing on the formation of our world view; one just can't keep out the personal component and what one thinks one knows about a person -- their status, their reputation, their personal traits, their allegiances, not to mention the human propensity to identify with social groups and personalities -- all feed into the evidence in a more general process than institutionalized science often pretends to allow. The latter attempts to garner observational protocols in carefully controlled circumstances, but for most of the time social texts have to stand in for direct experimental protocols.

Accordingly, let me expand a bit further on my background. After conversion to Christianity as a young adult I was a YEC for a while, as I had the misfortune to be linked to a Christian culture that bound up these beliefs with faith. However, when I was mature enough to review the situation with YEC it didn't survive the review process. Moreover, as the distinction between general and special dispensation clarified in my mind, the theory of evolution as a general dispensation model gained at least a favorable review status with me, and that is where I am at now. My intellectual interests aren't anywhere near as vested or polarized as you may think. I regard IC and ID as worthwhile input to the review process and have plenty of time for ID theorists.

Specified Information: Part 2 of 7

When it comes to ID I am still a learner, so bear with me as I attempt to come to grips with some of its concepts. As the notion of 'complex specified information' is foundational in your work, let me start with my problem (or my misunderstanding?) with this concept. When I looked at the definition of specified information given by Wikipedia and the researchintelligentdesign web site I found that definition contrary to my expectations.
I had guessed that specified information was going to be a quantity that somehow would encapsulate the notion of the organized complexity we find in organisms. Clearly the disorder value W is not adequate as an index in this connection; it reaches a minimum and a maximum only at the ends of the disorder spectrum -- from the minimum disorder of bland periodic sequences to the maximum disorder of the random sequence. As far as W is concerned the organized complexity of organisms is an unremarkable state somewhere in between the maxima of order and disorder, and it is unrecognized by a mathematical turning point in W; W just keeps increasing once simple order is left behind. As with the definition of 'mutual information' I was expecting complex specified information to register a maximum turning point somewhere between minimum and maximum disorder. But no -- not if my understanding is correct. Dembski's definition of the specified information of a pattern means that (keeping replicational and probabilistic resources constant) specified complexity increases as the size of the class of patterns with less than or equal Kolmogorov complexity decreases. As the order of a pattern increases, the class of strings with equivalent Kolmogorov complexity gets smaller and smaller, and so specified information gets larger and larger -- in other words, Dembski's definition seems simply to be the inverse of W. Another thing that frustrates Dembski's definition for me is that Kolmogorov complexity reaches a maximum for random sequences (because they are incompressible) and therefore it seems an inappropriate quantity to use if one wants to nail down the 'mid range' complexity of organic structures -- Kolmogorov complexity roughly follows W. Perhaps there is something I've misunderstood here.
However, let me leave that issue on one side and at least acknowledge that structures like organisms are far removed from the bland extremes of order and disorder: organic structures appear to be an anomaly in a cosmos otherwise filled with the extremes of simple order or disorder. I agree with your observation that the engines of life which create useful work from temperature gradients, whilst not violating the second law whilst they exist, nevertheless raise the question of how these hyper-complex engines came into existence in the first place; and if they came into existence from non-existence, the question is raised of how this change squares with the second law. This is, of course, the big issue here.

The Thought Experiment: Part 3 of 7

In your thought experiment you consider the spontaneous creation of a nano-jumbo jet (that's an oxymoron for you) in the context of diffusion. Generalizing the model by replacing the nanojet with the class of all complex functional artifacts, we arrive at a similar point. This class is of unknown size and impossible to enumerate due to the indeterminate nature of just what constitutes a 'complex functional artifact'. It is a very large class, but there is one thing we can be fairly sure of: in comparison with W(random), W(complex functional artifacts) is likely to be a lot, lot smaller. Hence, using your diffusion model, we conclude that the probability of any complex functional artifact arising is negligible. However, when we turn to the class of highly ordered configurations such as 'crystalline' periodicities (or even Penrose's aperiodic crystals), W(periodic) will be a lot smaller even than W(complex functional artifacts). Thus, using your randomly diffusing nanobot model, we come to the conclusion that periodicity is a much harder state to realize than a complex artifact! From the perspective of simple diffusion, periodic structures look to be a much greater feat of organization than organisms!
So if this is the second law worked from first principles, then it seems that these first principles place a greater stricture of improbability on simple order than they do on complex functional artifacts! Something is missing here. I find this behavior of the nanobot model counter-intuitive, and I believe it traces back to an important omission. The nanobot model neglects mathematical constraints on the system that may reduce W to a value much lower than its apparent 'logically permissible value'. For example, as a result of the gravitational field of a planet, an equilibrium atmosphere is not a uniform distribution of gas, but is distributed according to the Boltzmann distribution.

The nanobot model doesn't seriously engage the constraint introduced by particle interactions. In the case of real crystallization the effect of this constraint is relatively easy to comprehend: particle interactions set up a kind of configurational 'ratchet' whereby the first fragment of an ordered configuration, if stumbled upon by the diffusion process, 'sticks'; the next installment of the configuration also sticks, and so on. The result is a kind of 'induction' process: if n is probable then n + 1 is probable, and so, assuming n = 1 is probable, 'crystallization' will take place in a series of 'inductive' stages. As you know, there is, of course, no violation of the second law required by this local increase in order, because when the system reaches equilibrium the local increase in order represented by the crystal is offset by the increase in the overall W afforded by waste heat. The 'elementary' normal forces of nature are effectively working like a natural version of Maxwell's demon: when a diffusing particle finds its place in the periodic nexus, those forces select it.

Before going on to look at organic structures, let me note that you suggest, without proof, that the nanojet is an isolated island of functionality.
This is unclear to me: the class of complex functional artifacts has been neither well defined nor enumerated, and it is therefore difficult to determine just how this class is arranged in configuration space. So whether the nanojet sits on an isolated island of functionality, and therefore displays the property of irreducible complexity (IC), is difficult to establish. This, of course, contrasts with crystals: here it is relatively easy to comprehend the 'induction rule' that 'bridges' the gaps, allowing the formation of an otherwise highly improbable structure to proceed in stages; crystals are not 'isolated' ordered structures, but are found on the 'inductive highroads' of simple organization.

Organic Structures: Part 4 of 7

The class of organic structures also suffers from the definition and enumeration problems that afflict the class of complex functional artifacts, but there is one known constraint on this class: organic structures must be self-perpetuating/self-maintaining. As with crystal structures, once they have been formed (by whatever means) they tend to persist, although in a much more proactive way than crystals. Also, it seems fairly intuitively compelling that W(self-perpetuating organized structures), although very much bigger than W(crystals), is still very small compared to the entire space of possibilities, and hence the class of organic structures also seems at first sight to be a prohibitively improbable class.

Can the inductive Maxwell demon approach be used to form organic structures? According to Wikipedia, all theoretical and experimental investigations into Maxwell's demon suggest that any demon subject to the general dispensation of our cosmos is unable to violate the second law: if the 'demon' is a natural agency, it creates heat both in the gathering of the information it needs and in the selection of the products it is looking for.
Hence, any natural demon (as opposed to a supernatural agency) creates waste products that compensate for the local reduction in entropy entailed by its 'sorting' work. Because human technology is now reaching a point where Maxwell's imaginary experiment can actually be carried out, let's extend this a bit further and imagine that human technology has advanced to the point where humans could watch a prebiotic soup and, in the manner of Maxwell's demon, select and isolate any spontaneous products that moved toward abiogenesis, then submit these products to the next stage of selection, and so on until an elementary organism resulted. Whether or not such a fanciful scenario could actually be achieved is not the point here: the point is that if it could happen, humans, because they are a natural agency, necessarily generate waste products in carrying out their observations and selections, and this leads to an entropy increase that offsets the decrease in entropy that would be entailed by the creation of an elementary organism. The whole imaginary scenario is, of course, of no more help to the evolutionary case than Dawkins' 'METHINKS...' simulation (an experiment that assumes the answer is already there waiting in the wings to manifest itself), but what it does show is that if we can find a natural Maxwell demon (humans in this case) the second law is not violated even during the construction of fantastic complexity.

A Natural Maxwell Demon? Part 5 of 7

So if evolution has occurred, where is its natural Maxwell demon? The feature of organisms that does not come out in your work is that, unlike human artifacts, organisms are very proactive in their self-maintenance and perpetuation; if they should come into existence in the right environment (by whatever means) they are self-selecting; organisms are their own Maxwell demon. What selects an organic structure is not some external demon but the nature of the structure itself.
The consequence of this is that if (I acknowledge that this is the big controversial 'if') inductive paths exist in configuration space for the class of self-selecting structures, all the way from relatively elementary forms to highly complex organisms, then there is the real possibility that the Maxwell demon effect will come into play. Here the demon effectively exists in the platonic realm of the configuration space implicitly defined by the providential physical regime. The ontology of the 'natural demon' is similar to a 'law of physics' in that it has an abstract meta-existence that stands above the stuff of the cosmos as a kind of mathematical constraint. Ergo, the natural status of the demon entails that the formation of organisms as a result of any inductive paths in configuration space being followed will not violate the second law any more than any other natural Maxwell demon does. This logical consistency of the second law with natural Maxwell demons has nothing whatever to do with the remote logical possibility of a highly improbable spontaneous formation; rather, it is to do with a conjectural feature of the organization of configuration space; that is, are the members of the class of self-maintaining structures juxtaposed to form an inductive set? In deference to the ID community I stress that this is a conjecture: the IC thesis denies the existence of these inductive connections, in which case evolution is prohibited by the kind of analysis you have already given. But make no mistake about it: your analysis only works if IC is assumed.

Although I favor evolution, that's not to say I don't have my doubts about it: in particular, abiogenesis is very sketchy, with paleontological evidence thin on the ground and speculation rife.
At the low end, where n = 1 and the possible structures are far fewer in number, it is difficult even to conceive a kind of evolutionary 'bootstrap' structure with the proactive self-maintaining properties required to survive, and so evolution may founder at the first inductive step of n = 1. Also, I am fascinated by the protein folding question and the energetics of monomer and polymer formation that you mention above – something I need to study.

If you want to find an analytical proof that self-perpetuating structures are so isolated as to make evolution only possible by resort to highly improbable spontaneous leaps, perhaps there is a proof along these lines: given quantum theory, it is quite likely that configuration space has a discrete structure. The set of self-perpetuating structures may be so small relative to the size of this space that it is impossible to juxtapose so few elements into a set connected even by very thin fibrils of induction; it's as if one is trying to photograph a thin strand at low resolution: there simply aren't enough pixels per unit area to pick up the strand.

Summary of Issues: Part 6 of 7

I am not trying to carry this off by claiming that evolution is in the bag. There is no need to tell me that evolution has its own set of problems, like just how far the slow 'inductive' change required by evolution is justified by the fossil record, not to mention the speculative nature of abiogenesis (which tends to raise questions over whether even n = 1 is probable). It's more a case of trying to point out some of the areas in your argument that I believe need more work. Let me list the issues I have with your work:

1. The definition of specified information is still unclear to me.
2. Your thought experiment deals with artifacts and not proactive self-maintaining structures.
3. No distinction is made between the organized complexity of organisms and more banal states, like a melting crystal, that may have a similar value of W.
4. Your thought experiment suggests that simple order is more difficult to achieve than ordered complexity.
5. In assuming that functional structures are isolated, you assume without proof that the class of functional structures has an IC layout in configuration space. This could be either a strength or a weakness depending on the truth of IC. But since the class of functional artifacts is difficult to define, the truth of IC is correspondingly difficult to establish, and ditto for the class of self-maintaining organic structures.
6. Your work doesn't address the possible existence of a natural Maxwell demon.
7. Neither the second law nor its underlying principles are violated by natural Maxwell demons.

Epilogue: Part 7 of 7

In my early Christian days, as a somewhat unwilling believer in YEC, I was as motivated as you to show that evolution violated the second law. So I got out my pencil and paper and did some analytical work with random walks. These were the days before personal computing, so analysis was the only option. A feature that I added to my models was a bias on the random walk, in order to get an idea of the effects of particle interactions. The bias effectively skews the peak of the random walk step probability in one direction or the other, and this is one way of modeling a 'ratchet' effect; in particular, a varying bias has the effect of creating clumps of particles. Using this model I discovered a property that at the time looked a bit like the second law: if the step probability of the biased random walk had a sufficiently long asymptotic tail, the clumping was always only temporary: all clumps eventually dissolved and 'heat death' ensued. The appearance and disappearance of these clumps contained just a small hint of evolution, a hint that was a little alarming for a naive YEC. This was about as far as I got, and in any case I eventually lost my somewhat affected conviction in YEC.
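A biased random walk of this general kind is easy to sketch in a few lines today. This is a loose reconstruction for illustration only: the 'sticky centres' and the bias value are invented, and it is not the original pencil-and-paper model.

```python
import random

# A loose reconstruction of a biased random-walk 'clumping' model.
# A bias toward the nearest 'sticky centre' skews each step's probability,
# which is one crude way of modelling a ratchet-like particle interaction.

random.seed(1)
L = 200                                  # sites on a ring
centres = [50, 150]                      # where the bias pulls particles
particles = [random.randrange(L) for _ in range(300)]

def step(x):
    c = min(centres, key=lambda m: abs(x - m))   # nearest bias centre
    d = 1 if c > x else -1                       # direction toward it
    # step toward the centre with probability 0.6, away with 0.4
    return (x + d) % L if random.random() < 0.6 else (x - d) % L

for _ in range(500):
    particles = [step(x) for x in particles]

# clumping measure: particles within 10 sites of a bias centre
near = sum(any(abs(x - c) <= 10 for c in centres) for x in particles)
print(f"{near}/{len(particles)} particles have clumped near a centre")
```

With a fixed bias the clumps here are permanent; reproducing the 'temporary clumping under long-tailed steps' result described above would need a heavy-tailed step distribution and a varying bias, which this toy omits.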
Instead, at a later date (about fifteen years later, in fact) I resurrected the work, and by inserting some complex numbers here and there in the diffusion theory I stumbled across my amateur flight of fancy: an excursion into Quantum Gravity. This resulted in a bit of vanity publishing: a book called 'Gravity and Quantum Non-Linearity', which can be viewed on Google Books. As for the story of how I found my way into Quantum Gravity, that can be found here. However, with all that behind me, I am now back looking at evolution and creation. I understand that the ID community has put a lot of work and emotional investment into their thesis, and therefore I wouldn't presume to wade in and tell them that they have got it all wrong. Instead I try to adopt the same approach to my ideas that I took with my book: treating them as perhaps ultimately flawed, but, since I like playing around with theoretical concepts and equations, making sure that I enjoy the journey to the full even if the destination isn't all that I had hoped it would be.

Timothy V Reeves
March 31, 2008 at 11:19 AM PST
Hi Tim. Thanks for the Easter wishes. Same -- belated -- to you. I've been busy elsewhere at UD and offline, including some interesting developments on a sustainable energy project. Just let me know when you are ready to comment. GEM of TKI

kairosfocus
March 27, 2008 at 05:35 AM PST
Thanks very much, Kfocus. This is just to acknowledge your reply and confirm that I am still in circulation. I have got as far as reading your appendices and have been contemplating them. I have a rather busy Easter weekend ahead, what with family and an ailing mother, but I'll be with you ASAP. Have a good Easter!

Timothy V Reeves
March 20, 2008 at 04:45 PM PST
TVR: I have been busy elsewhere at UD and of course in other parts, especially in the real world. I will respond on points:

1] TVR, 44: Is the second law a strong enough constraint to eliminate conventional evolution?

As shown supra and in the always linked, the issue is not whether it is logically possible, but whether it is reasonable on the gamut of the observed cosmos for the scenario envisioned by the evolutionary materialists to have occurred. [This is similar to the point that it is strictly possible for all the oxygen molecules in the room in which you sit to rush to one end, leaving you choking.] I notice you do not worry about the latter. Its probability is comparable to that of spontaneous OOL within the gamut of our observed universe in some prebiotic soup or another. And it is actually probably more probable than that we see the increments in biofunctional information that characterise the origin of major body plans – a jump from millions to hundreds of millions of DNA base pairs, dozens of times over, as just one index. Until the evo mat scenarios can adequately answer to these issues, they simply have not made their case, especially when we know already that intelligent agents routinely generate FSCI requiring comparable amounts of storage. AND, we see why there is such a low probability, on grounds basic to why there is a second law of thermodynamics: the non-functional configs are overwhelmingly more probable. [Indeed, had you read TBO's TMLO, you would see that the energetics to form the monomers and polymers make the equilibria ridiculous for OOL.] That is, this is inference to best explanation relative to empirically anchored facts. There is a far more credible, even obvious, empirically anchored explanation available, save for question-begging re-definitions of "science" and associated censorship and frankly unjust career-busting: intelligent action.

2] Can we work back from the second law to probabilities?
As shown, we can estimate relevant probabilities well enough to answer, by using the microstate and clusters principle along with the theorems of equipartition and equiprobability of microstates under relevant conditions. [Cf my nanobots–microjets roughly calculated example, which is directly related to the thermodynamics of undoing diffusion, the same diffusion that underlies much of modern solid state electronics.]

3] Does the second law lose too much information in its derivation? That is, is the second law too blunt to eliminate conventional evolution?

I am not deriving the 2nd law as such, save by way of illustration. I am using its underlying principles to show the problem. Cf the microjets example again, which is accessible to someone with a reasonable high school level education. Kindly address that. (Nor does your cross-reference to DLH impress me, not when the answer is right in front of you.)

4] What about contingency and laws yet to be discovered?

Such "undiscovered laws" amount to a promissory note – after 150 years of trying. On inference to best explanation relative to the laws we do know, there is already a "law" that, with necessity + chance, can easily enough account for OOL etc.: intelligent action. Besides, future laws will in general be compatible with the current ones; this is necessarily so, as they will have to cover the cases covered by present ones, then extend to the cases that the present ones do not cover. [Think Newtonian dynamics and quantum and relativity here.] Thirdly, if "life" is written into the basic laws of the cosmos, that looks a lot like the ultimate form of cosmological front-loading. That simply brings you up to the level of cosmological fine-tuning, for which there is already a strong case for intelligent agency as creator of the observed cosmos.
Indeed, you go on to acknowledge just that: "'Front loading', around t=0, if it could be proved, would come under 1."

5] The second law contradicts a putative concept of evolution that purports to be consistent with known laws and contingency

The issue is not logical consistency, but what outcomes rise above effectively zero probability on the gamut of the cosmos. To see the force of this, observe how you routinely accept that the posts that appear in this blog are messages tracing to intelligent agents. But in fact it is logically and physically possible – cf my always linked, section A – that such is just lucky noise, with quite similar probabilities to generating functional information in a DNA strand of a length yielding comparable information storage capacity. The inconsistency in your judgements is what is telling: selective hyperskepticism.

6] The hot issue here isn't whether evolution is true or not but whether the second law as currently stated contradicts evolution

Again, the issue is not bare logical possibility but sufficient probability to rise above an effective zero likelihood of occurrence on the gamut of the observed cosmos. Just as, in hypothesis testing, one has a chance of wrongly rejecting chance explanations, but confidently infers to agency on a sufficiently small probability that chance is the correct explanation. The Dembski-type UPB, odds beyond 1 in 10^150 [which would in all reasonable cases mean configs isolated to better than one in 10^150 – 10^300, the latter taking in islands of functionality of up to 10^150 states easily], gives the lowest odds of incorrectly rejecting chance of any such criterion I have ever seen.

7] Dispensations and divine wills . . .

Irrelevant to the inference across causal factors – chance, necessity, agency – to: contingent, so not dominated by necessity; FSCI, so not dominated by chance; thus agency, on best current, empirically anchored explanation.
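As a quick arithmetic check on the scale of the bound quoted above (nothing more than a calculator exercise; no biological claim is made):

```python
from math import log2

UPB = 10**150          # the Dembski-type universal probability bound quoted above

# A DNA strand of n bases has 4**n possible configurations.
# Find the shortest strand whose configuration count alone exceeds 1/UPB.
n = 1
while 4**n < UPB:
    n += 1

print(n)                        # strand length at which 4**n first tops 10**150
print(round(log2(UPB), 1))      # the same threshold expressed in bits
```

So a strand of only a few hundred bases already has more configurations than the bound's threshold, which is why the argument turns on how densely functional configurations are scattered in that space rather than on raw configuration counts alone.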
Thereafter, one may ask about agent identification, which normally proceeds on other contextual factors. For instance, OOL and OO of body-plan-level biodiversity do not currently implicate any extra-cosmic agent. However, once we look to cosmogenesis and the organised complexity and contingency of the cosmos as we see it, the necessary being implicated by the existence of such an observed cosmos has two effective candidates: [1] a quasi-infinite, unobserved [perhaps unobservable] array of sub-cosmi with randomly scattered physical parameters, or [2] an agent of sufficient power and intelligence to create the cosmos as we observe it. This is now metaphysics – worldview analysis – not science, albeit such analysis also influences science. On other facts, and on issues over coherence, explanatory power and elegance [cf. Ac 17 etc.], I happen to infer that 2 is the better option. That is, the God I have known from childhood is credibly not an illusion. GEM of TKI

PS: On ad hominems, kindly recognise that the well-poisoning, dismissive attack to the man is the standard resort of the darwinistas, and studiously avoid personalities.

kairosfocus
March 19, 2008 at 11:58 PM PST
Just a reminder: yes, I am still about. This subject remains a hot topic with me, and so I will be posting updates from time to time. I trust that this page will remain available for comments? If it doesn't, not to worry, because I'll post elsewhere. I have stored and bookmarked this page.

FrostNNNN's recommendation of Kfoc's work would have been more compelling if I felt that what he was recommending was intelligible to him – anyone else care to put in a good word, preferably those with a physics/maths background?

Once again, my terms of reference: the hot issue here isn't whether evolution is true or not but whether the second law as currently stated contradicts evolution. If the latter is true it could be a short cut to ID. So the stakes are high: such is my foreground agenda. My background agenda is to answer this question: as regards the creation of life, how does the divine will cut it between general dispensation and special dispensation? As I have already said, I currently favor the general dispensation model and consequently I am road-testing it.

BTW: Kfoc seems rather over-sensitive about ad hominem attacks. Perusing the above, it seems that I have given him no grounds to level this accusation at me. Let me reassure Kfoc of my goodwill. I understand that if one's name has been kicked around to such an extent that one must remove it from one's website, mild paranoia about such attacks is the price one pays. An imaginative reading of the silences and spaces is then all too easy.

Cheerio chaps. I'll be back, as the saying goes.

Timothy V Reeves
March 19, 2008 at 05:47 AM PST
Now I'm sure you'll understand, Kfoc, that if I am to do your work justice – and I'm sure you want me to do your work justice – it's going to need time commensurate with your valiant efforts. But let me be bluntly frank, Kfoc: yours is not the only work I am trying to do justice to. The points you have attempted to make are just part of a more extensive investigation of several sources, all of which justly vie for my attention. I crave your patience in accepting that your points must take their rightful place in this investigation and not be at the head driving the investigation alone; their value should be neither inflated nor underestimated. At first look, however, your own work does seem to contain some small problems, but I'll let you know how that pans out in due course. I realize that you are anxious to get feedback on your work, but please be patient.

Let me repeat my terms of reference here in question form: Is the second law a strong enough constraint to eliminate conventional evolution? Can we work back from the second law to probabilities? Does the second law lose too much information in its derivation? That is, is the second law too blunt to eliminate conventional evolution? There is some doubt here, it seems, because as DLH concedes at 32 above: "Your bluntness objection is worth exploring. At least on trying to explain systems… So the challenge of developing new formulations that clearly distinguish between CSI and physical 'order' etc. It's on my To Do list." So, Kfoc, if you think you have sorted this one out, go and tell DLH and then he can cross an item off his To Do list.

Just to make sure I've understood you correctly, FrostNNN, here in my own terms is a digest of what I understand to be your salient points:

1. You do not believe that a general dispensation combining known laws and contingency (viz. random configurations in space and time) is sufficient to explain the creation of life. (What about contingency and laws yet to be discovered?)
2. You believe that the creation of life involves a series of special dispensations of creation over the history of the universe. ('Front loading', around t=0, if it could be proved, would come under 1.)
3. You believe that the second law contradicts a putative concept of evolution that purports to be consistent with known laws and contingency.
4. You distinguish between an evolution driven by known laws and contingency, and an evolution driven by special dispensations of information.
5. You accept conventional paleontological history but don't accept that the engine driving this history is to be found only in known laws and contingency.

I hope that doesn't misrepresent your position, but that's how I understand it at present; don't hesitate to put me right if I've misrepresented anything. At the moment I'm test-driving the general dispensation theory. However, to put this theory through its paces it needs to either pass or fail at point 3. This seems an easier investigation to carry out than putting it through the IC test, because the second law looks to be more analytically amenable than IC. Clearly, if point 3 can be proved, I would have to review the general dispensation model.

Timothy V Reeves
March 17, 2008 at 11:54 AM PST
TVR: I continue to await your response, on the merits, regarding the stat mech considerations on OOL and the related origin of functionally specified complex information [FSCI], and note that Frosty has pointed you my way. Cheerio, GEM of TKI

kairosfocus
March 17, 2008 at 03:33 AM PST
"Let me clarify my aims. I’m trying to establish at what level the information is ‘coming in’, so to speak, in order to create life. Is the creation of life a general dispensation consistent with Divinely designed and sustained processes, or is it a product of several special dispensatory creative acts spread over the long history of Earth? I currently favor the former view, although I acknowledge that the ID notion of IC is a robust challenge to evolutionary theory and requires some serious consideration."
Yes, let's indeed talk about the structure of the universe and the nature of physical matter/energy. According to Einstein's General Relativity, time and space as one are curved in ways dependent upon the nature of local gravitational fields. During the big bang the assembly instructions were built into the first cause, which transcended matter. After the universe began to explode and take shape, matter started to assemble based upon those front-loaded instructions (and it is still happening today). Now, those front-loaded instructions are not simply existent only in the beginning; they are acting and developing all of the time. This is the nature of time. So the improbable organization of matter that we see in the universe tells us that the laws of nature can be suspended (time and space) as well as the second law, when CSI arises. The matter does not come from anywhere new; it is ever present, and it simply moves around as an engineer moves nuts and bolts (an invisible hand) to build a motor. The universe is all part of one act; or, as Shakespeare said, "All the world may indeed be a stage." There is this idea implanted into our minds as kids that physical laws, or laws of any sort, cannot be broken. This is obviously not true if you think about human law (i.e. OJ Simpson). The designer's laws are only inferred based upon physical empirical analysis. Just because we had not seen a black hole in 1900 didn't mean that they didn't exist. The laws of physics are merely human-designed rules that are either rarely broken or where we have yet to discover a case where they are broken. Interestingly, the second law must have been broken for specified complex life to arise; and, paradoxically, if the second law holds true, then unintelligent evolution could not have taken place.
The only way in which unintelligent processes could have produced CSI is if we discover some matrix of evidence that the universe is completely random and large enough to produce the probabilistic resources necessary for life to arise.
"I am indeed taking into consideration “…that you don’t have to break the second law for random evolution to be discredited”, because in this very comment thread the efficacy of the second law as an evolutionary roadblock is at issue. If it doesn’t roadblock evolution then there is still IC to consider."
Let's make sure that we are on the same plane of thought here. The second law prevents random evolution, not evolution per se. One of the problems people often have in this debate is with the concept of the formal interpretation of change over time, or common descent, called "evolution." The other is the interpretation of evolution to signify a Godless, design-less universe. IC does in fact present an even greater formal physical structural model for "random" or blind evolution to get around. Now, the second law shows that it is in fact highly improbable (almost physically impossible) that SC could arise in a physical system without "assembly instructions." Once again, this is a critique of random evolution or "materialistic" evolution, because we are saying that "information" that is complex and purposive is required to assemble complex specified life. I am not great at math and physics, so to understand and interpret the second law exactly, you'll have to ask Kairosfocus. The bottom line here is exactly what Wittgenstein said on p. 149 of the Tractatus: "The solution of the riddle of life in space and time lies OUTSIDE space and time." Here he means logically, not physically, because he is talking about an existence that is not physical. Perhaps intelligence fits the bill.

Frost122585
March 16, 2008 at 08:53 PM PST
Frost122585: I largely agree with what you have said there. As a theist I would certainly want to make mention of physical laws (and contingent complexity) and give credit to them as a medium of Divine providence. In fact we needn't even go back to the 'first cause' to find a mystery: I know of no logical reason why the universe should continue from moment to moment, and therefore I take this as evidence of the power of Aseity sustaining a contingent cosmos everywhere and everywhen. "In Him we live and move and have our being…" Hence I'm inclined to agree with your view that comprehensible laws are a sign of providence.

Let me clarify my aims. I'm trying to establish at what level the information is 'coming in', so to speak, in order to create life. Is the creation of life a general dispensation consistent with Divinely designed and sustained processes, or is it a product of several special dispensatory creative acts spread over the long history of Earth? I currently favor the former view, although I acknowledge that the ID notion of IC is a robust challenge to evolutionary theory and requires some serious consideration.

I am indeed taking into consideration "…that you don't have to break the second law for random evolution to be discredited", because in this very comment thread the efficacy of the second law as an evolutionary roadblock is at issue. If it doesn't roadblock evolution then there is still IC to consider. However, if I am reading Kairosfocus correctly, then I understand that he believes he has some work showing the second law to be an evolution stopper, so I had better give his work some time. Have you read Kairosfocus' work? Do you have an opinion on it?

Timothy V Reeves
March 16, 2008 at 07:01 AM PST
TVR says,
"Crystals crystallize because the physical regime providentially provides for a relatively simple morphospace containing ‘contours of stability’, ‘ratchets of formation’, or ‘Dawkins slopes’ or whatever you want to call them. In this process there are no imports of explicit information across the boundary, but there is an export of randomized kinetic energy (that is, heat) maintaining a positive overall value in the entropy increment.”
No, that is incorrect. There is import of explicit information all the way through. The various forces that are at work contain within themselves a given probabilistic event calculation. That is to say that you actually have an increase in information all the way through the process, because as the specified complexity increases, the probability of the event occurring from a random field of particles decreases. It is in this sense that information is directly necessitated among various events such as crystal formation. One of the tricks employed by Darwinism is that it gives no mention or credit to the laws themselves. That is to say that everything is just an unguided natural process to Darwinists "except" those things which are governed by natural physical law. The point that you need to take into consideration is that you don't have to break the second law for random evolution to be discredited. In fact, the comprehensible laws of the universe, such as the second law, are exactly what you would expect to find in a universe that is intelligently designed.
"A similar analysis may be carried out on the heat pouring in from the Sun to the Earth. The physics of the Earth system uses the low entropy of the sun/outer space temperature gradient to produce organized forms of energy, namely kinetic energy (winds) and potential energy (clouds). So pumping heat into something can produce order, at least at the organizational low end."
What you have done here is appeal to the ever-despised "infinite regress argument", simply moving the energy and information "backwards" along the timeline to get around the claim of entropy. We can go all the way back to a first cause if you like, but within the first cause, either in the laws which govern matter or within the matter itself, are assembly instructions that account for all of the laws that we discover in action throughout the universe. The probabilistic scenarios immediately reject natural materialistic chance (such as dice throwing), and then we are left with the question of what source CSI can be arranged from. Intelligence is the best and only workable inference of which I am aware.

Frost122585
March 16, 2008 at 01:58 AM PST
TVR: You raised a challenge on the merits [rhetorical stratagems aside], and I have responded on the merits. Kindly, therefore, address the challenge on the merits. Your move . . . GEM of TKI
kairosfocus
March 16, 2008 at 01:16 AM PST
Thanks, Kairosfocus, for honoring me with a long reply. I realize that there are few things more frustrating than believing you have achieved, through quite strenuous and time-consuming effort, some useful conclusions, and then someone coming along and pronouncing as if you have never spoken. As you can see from 33 and 35 above, DLH has given me some interesting reading (and some re-reading), which I am currently going through, so I'll bundle your stuff along with the stuff DLH has given me. This should keep my nose down for a bit, before I start pronouncing again. As for me, I'm just an amateur science dabbler, with no reputation to think about, no face-saving to be done, and no lecture-circuit 'customer base' to satisfy. I like to travel light. Sorry you haven't got in me anyone of any status to look at your work. But consider it I will. However, as you have shrewdly observed, I can also be a pretty nasty piece of work when I want to: issuing evasions, sneaky ad hominem, a master of innuendo. You've seen through me pretty quickly, haven't you? Unless you want me to just run away with my tail between my legs, you had better watch your back! Alternatively I might just decide to stick around like a bad smell, in which case the UD experience will become a little more eclectic for you.
Timothy V Reeves
March 15, 2008 at 06:01 PM PST
PS: I might as well add in some remarks on the stat thermo-D form of 2 LOT:
4] Yavorski and Pinski, in the textbook Physics, Vol I [MIR, USSR, 1974, pp. 279 ff.], summarise the key implication of the macro-state and micro-state view well. As we consider a simple model of diffusion, let us think of ten white and ten black balls in two rows in a container. There is of course but one way in which there are ten whites in the top row, the balls of any one colour being for our purposes identical. But on shuffling, there are 63,504 ways to arrange five each of black and white balls in the two rows, and 6-4 distributions may occur in two ways, each with 44,100 alternatives. So, if we for the moment see the set of balls as circulating among the various different possible arrangements at random, and spending about the same time in each possible state on average, the time the system spends in any given state will be proportionate to the relative number of ways that state may be achieved. Immediately, we see that the system will gravitate towards the cluster of more evenly distributed states. In short, we have just seen that there is a natural trend of change at random towards the more thermodynamically probable macrostates, i.e. the ones with higher statistical weights. So "[b]y comparing the [thermodynamic] probabilities of two states of a thermodynamic system, we can establish at once the direction of the process that is [spontaneously] feasible in the given system. It will correspond to a transition from a less probable to a more probable state." [p. 284.] This is in effect the statistical form of the 2nd law of thermodynamics. Thus, too, the behaviour of the Clausius isolated system above is readily understood: importing d'Q of random molecular energy so far increases the number of ways energy can be distributed at micro-scale in B, that the resulting rise in B's entropy swamps the fall in A's entropy.
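[Aside: the arrangement counts quoted from Yavorski and Pinski are easy to verify directly. A minimal Python sketch (the helper name `weight` is mine, purely for illustration) computes the statistical weight of each macrostate:]

```python
from math import comb

def weight(k):
    # Number of micro-arrangements with k white balls somewhere in the
    # top row of ten positions and the remaining 10 - k whites in the
    # bottom row; balls of one colour are identical, positions are not.
    return comb(10, k) * comb(10, 10 - k)

print(weight(10))  # 1      -- only one way to have all ten whites on top
print(weight(5))   # 63504  -- the evenly distributed 5-5 macrostate
print(weight(6))   # 44100  -- one of the two 6-4 distributions (4-6 matches)
```

[Summing `weight(k)` over all k gives comb(20, 10) = 184,756 total microstates, so the 5-5 macrostate alone holds about a third of them — the "gravitation" toward even distributions in a nutshell.]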
Moreover, given that FSCI-rich micro-arrangements are relatively rare in the set of possible arrangements, we can also see why it is hard to account for the origin of such states by spontaneous processes in the scope of the observable universe. (Of course, since it is as a rule very inconvenient to work in terms of statistical weights of macrostates [i.e. W], we instead move to entropy, through s = k ln W. Part of how this is done can be seen by imagining a system in which there are W ways accessible, and imagining a partition into parts 1 and 2. W = W1*W2, as for each arrangement in 1 all accessible arrangements in 2 are possible and vice versa; but it is far more convenient to have an additive measure, i.e. we need to go to logs. The constant of proportionality, k, is the famous Boltzmann constant, and is in effect the universal gas constant, R, on a per-molecule basis, i.e. we divide R by the Avogadro Number, NA, to get: k = R/NA. The two approaches to entropy, by Clausius and Boltzmann, of course correspond. In real-world systems of any significant scale, the relative statistical weights are usually so disproportionate that the classical observation that entropy naturally tends to increase is readily apparent.)
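[Aside: the k = R/NA relation in the parenthesis above can be checked numerically with the standard CODATA/SI values — a quick sketch, not part of the original comment:]

```python
R = 8.314462618        # universal gas constant, J/(mol*K)
N_A = 6.02214076e23    # Avogadro number, 1/mol (exact in the 2019 SI)

k = R / N_A
print(k)  # ~1.380649e-23 J/K: the Boltzmann constant, as claimed
```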
Now, put this to work:
i] Consider the assembly of a Jumbo Jet, which requires intelligently designed, physical work in all actually observed cases. That is, orderly motions were impressed by forces on selected, sorted parts, in accordance with a complex specification. (I have already contrasted the case of a tornado in a junkyard: it is logically and physically possible that it could do the same, but the functional configuration[s] are so rare relative to non-functional ones that random search strategies are maximally unlikely to create a flyable jet; i.e. we see here the logic of the 2nd Law of Thermodynamics, statistical thermodynamics form, at work. [Intuitively, since functional configurations are rather isolated in the space of possible configurations, we are maximally likely to exhaust available probabilistic resources long before arriving at such a functional configuration or "island" of such configurations (which would be required before hill-climbing through competitive functional selection, a la Darwinian Natural Selection, could take over . . . ), if we start from an arbitrary initial configuration and proceed by a random walk.])

ii] Now, let us shrink the Hoylean example, to a micro-jet so small [~ 1 cm or even smaller] that the parts are susceptible to Brownian motion, i.e. they are of about micron scale [for convenience] and act as "large molecules." . . . Let's say there are about a million of them, some the same, some different, etc. In principle, possible: a key criterion for a successful thought experiment. Next, do the same for a car, a boat and a submarine, etc.

iii] In several vats of "a convenient fluid," each of volume about a cubic metre, decant examples of the differing mixed sets of nano-parts, so that the particles can then move about at random, diffusing through the liquids as they undergo random thermal agitation.

iv] In the control vat, we simply leave nature to its course. Q: Will a car, a boat, a sub or a jet, etc., or some novel nanotech emerge at random? [Here, we imagine the parts can cling to each other if they get close enough, in some unspecified way, similar to molecular bonding; but that the clinging force is not strong enough at appreciable distances [say 10 microns or more] for them to immediately clump and precipitate instead of diffusing through the medium.] ANS: Logically and physically possible (i.e. this is subtler than having an overt physical force or potential energy barrier blocking the way!) but the equilibrium state will on statistical thermodynamics grounds overwhelmingly dominate — high disorder. Q: Why? A: Because there are so many more accessible scattered-state microstates than there are clumped-at-random state ones, or even more so, functionally configured flyable-jet ones . . . .

v] Now, pour a cooperative army of nanobots into one vat, capable of recognising jet parts and clumping them together haphazardly. [This is of course work, and it replicates bonding at random. Work is done when forces move their points of application along their lines of action. Thus in addition to the quantity of energy expended, there is also a specificity of resulting spatial rearrangement depending on the cluster of forces that have done the work . . . .] Q: After a time, will we be likely to get a flyable nano-jet? A: Overwhelmingly, on probability, no. (For, the vat has ~ [10^6]^3 = 10^18 one-micron locational cells, and a million parts or so can be distributed across them in vastly more ways than they could be across say 1 cm or so for an assembled jet, etc., or even just a clumped-together cluster of micro-parts. [A 1 cm cube has in it [10^4]^3 = 10^12 cells, and to confine the nano-parts to that volume obviously sharply reduces the number of accessible cells consistent with the new clumped macrostate.] But also, since the configuration is constrained, i.e. the mass in the micro-jet parts is confined as to accessible volume by clumping, the number of ways the parts may be arranged has fallen sharply relative to the number of ways that the parts could be distributed among the 10^18 cells in the scattered state . . . .)

vi] For this vat, next remove the random-cluster nanobots, and send in the jet-assembler nanobots. These recognise the clumped parts, and rearrange them to form a jet, doing configuration work. (What this means is that within the cluster of cells for a clumped state, we now move and confine the parts to those sites consistent with a flyable jet emerging. That is, we are constraining the volume in which the relevant individual parts may be found, even further.) A flyable jet results — a macrostate with a much smaller statistical weight of microstates. We can see that of course there are vastly fewer clumped configurations that are flyable than those that are simply clumped at random, and thus the number of microstates accessible due to the change, [a] scattered --> clumped and now [b] onward --> functionally configured macrostates, has fallen sharply, twice in succession. Thus, by Boltzmann's result s = k ln W, we have also seen that the entropy has fallen in succession as we moved from one state to the next: a fall in s on clumping, and a further fall on configuring to a functional state; dS_tot = dS_clump + dS_config. [Of course, to do that work in any reasonable time or with any reasonable reliability, the nanobots will have to search and exert directed forces in accord with a program, i.e. this is by no means a spontaneous change, and it is credible that it is accompanied by a compensating rise in the entropy of the vat as a whole and its surroundings. This thought experiment is by no means a challenge to the second law. But, it does illustrate the implications of the probabilistic reasoning involved in the microscopic view of that law, where we see sharply configured states emerging from much less constrained ones.]
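[Aside: the order-of-magnitude bookkeeping in points v] and vi] can be sketched numerically. This uses the passage's own figures (~10^18 one-micron cells per vat, ~10^6 parts, ~10^12 cells in a 1 cm cube) and treats each part as independently placeable in any cell — an overcount that ignores exclusion, but adequate for the qualitative point:]

```python
from math import log

cells_vat = (10**6)**3    # 1 m^3 gridded into 1-micron cells: 10^18 cells
cells_clump = (10**4)**3  # a 1 cm cube: 10^12 cells
n_parts = 10**6           # roughly a million micro-parts

# If each part may sit in any cell, W scales as cells**n_parts, so by
# s = k ln W the entropy change on clumping, in units of Boltzmann's k, is:
dS_clump_over_k = n_parts * (log(cells_clump) - log(cells_vat))
print(dS_clump_over_k)  # about -1.38e7: a very large entropy *fall*
```

[Confining the parts to flyable-jet sites in step vi] would shrink W again, so dS_config adds a further negative term, exactly as the passage argues.]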
So, by scaling down Sir Fred Hoyle's 747-formed-by-a-tornado-in-a-junkyard remarks, we can see how the stat mech principles underlying the 2 LOT apply to OOL. On OO body plan level biodiversity:
viii] Now, let us go back to the vat. For a large collection of vats, let us now use direct micro-jet assembly nanobots, but in each case we let the control programs vary at random a few bits at a time -- say, hit them with noise bits generated by a process tied to a zener noise source. We put the resulting products in competition with the original ones, and if there is an improvement, we allow replacement. Iterate, many, many times. Q: Given the complexity of the relevant software, will we be likely, for instance, to come up with a hyperspace-capable spacecraft or some other sophisticated and un-anticipated technology? (Justify your answer on probabilistic grounds.) My prediction: we will have to wait longer than the universe exists to get a change that requires information generation (as opposed to information and/or functionality loss) on the scale of 500 – 1000 or more bits. [See the info-generation issue over macroevolution by RM + NS?]

ix] Try again, this time to get to even the initial assembly program by chance, starting with random noise on the storage medium. See the abiogenesis/origin of life issue?
Okay, I think that is enough to spark discussion on the merits. GEM of TKI
kairosfocus
March 15, 2008 at 09:17 AM PST
TVR: First, please, deal with the issue; do not attack the man -- whether directly or by subtle insinuations. [Onlookers: Had TVR taken time to glance at the Appendix 1, he would have seen that it answers, step by step, key issues based on foundational thermodynamics principles accessible to one who has done a first college physical science course and a similar mathematical course. Indeed, there is even a link to basic presentations of the underlying science. In so doing, it adverts to the Thaxton et al work of 1984 and uses fairly accessible standard results and reasoning in the context of drawing out the implications of thermodynamics and associated statistical mechanics principles for the claimed OOL and OO body-plan level biodiversity. The issue is the chain of reasoning and evidence, not me and who or what I am. To dodge the issue to attack the man directly or indirectly (by insinuations and loaded language) is to forfeit the issue.] Having noted such, I will pause to remark on points of significance, observing that the below is not a substitute for what I have already linked:

1] Roadblocks: We observe that three causal factors are commonly encountered: chance, natural lawlike forces giving rise to natural regularities, and agency. Situations of complexity have high contingency not explicable by natural regularities alone. Where we have complex, functionally specified information, chance forces are incapable of accounting for these phenomena on the gamut of the observed cosmos, due to probabilistic resource exhaustion. This is, in a nutshell, also the basic framework for the statistical justification of 2 LOT.
So, by the same principles as we use to justify 2 LOT, there is a barrier to OOL and OO body plan level biodiversity. In cases of entities where multiple components must be fitted together to achieve a function, or else the whole fails to work, a similar issue obtains: origin of complex [beyond 500 – 1,000 bits of information storage] body plans, synthesis of their components and assembly -- even inclusive of co-option of existing parts -- is maximally improbable for RV + NS on the scope of the observed cosmos. The Cambrian life revolution is a capital case in point, where there is need to account for dozens of phyla and sub-phyla at once, within a short window on earth.

2] Who's you . . . And, Mr Reeves, who are YOU? More to the point, I am a scientist and science educator in my own right who has looked at the issue for himself. I share my reasoning and conclusions, and discuss them especially in this blog with its many participants, a significant number of whom have regarded my remarks as a valuable contribution on the merits. I invite you to address these issues on the said merits.

3] The second law as it stands puts the constraint dS/dt > 0 on the total system. Correct. As perusal of Clausius' first example will show, as I discuss in App 1 as always linked, a hotter subsystem giving up d'Q to a cooler one will undergo entropy loss overbalanced by the entropy rise in the cooler system. But this immediately implies that an energy-importing system naturally tends to INCREASE its entropy. The way around that is to go to systems that couple input energy to do work, exhausting waste heat in the process so that overall entropy rises even as local order is created. As I discussed in the always linked (bringing the Mountain to Mohammed . . .):
2] But open systems can increase their order: This is the "standard" dismissal argument on thermodynamics, but it is both fallacious and often resorted to by those who should know better. My own note on why this argument should be abandoned is: a] Clausius is the founder of the 2nd law, and the first standard example of an isolated system -- one that allows neither energy nor matter to flow in or out -- is instructive, given the "closed" subsystems [i.e. allowing energy to pass in or out] in it. Pardon the substitute for a real diagram, for now: Isol System: | | (A, at Thot) --> d'Q, heat --> (B, at T cold) | | b] Now, we introduce entropy change dS >/= d'Q/T . . . "Eqn" A.1 c] So, dSa >/= -d'Q/Th, and dSb >/= +d'Q/Tc, where Th > Tc d] That is, for system, dStot >/= dSa + dSb >/= 0, as Th > Tc . . . "Eqn" A.2 e] But, observe: the subsystems A and B are open to energy inflows and outflows, and the entropy of B RISES DUE TO THE IMPORTATION OF RAW ENERGY. f] The key point is that when raw energy enters a body, it tends to make its entropy rise. For the injection of energy to instead do something useful, it needs to be coupled to an energy conversion device. g] When such devices, as in the cell, exhibit FSCI, the question of their origin becomes material, and in that context, their spontaneous origin is strictly logically possible but negligibly different from zero probability on the gamut of the observed cosmos. (And, kindly note: the cell is an energy importer with an internal energy converter. That is, the appropriate entity in the model is B and onward B' below. Presumably as well, the prebiotic soup would have been energy importing, and so materialistic chemical evolutionary scenarios therefore have the challenge to credibly account for the origin of the FSCI-rich energy converting mechanisms in the cell relative to Monod's "chance + necessity" [cf also Plato's remarks] only.) 
h] Now, as just mentioned, certain bodies have in them energy conversion devices: they COUPLE input energy to subsystems that harvest some of the energy to do work, exhausting sufficient waste energy to a heat sink that the overall entropy of the system is increased. Illustratively, for heat engines: | | (A, heat source: Th): d'Qi --> (B', heat engine, Te): --> d'W [work done on say D] + d'Qo --> (C, sink at Tc) | | i] A's entropy: dSa >/= - d'Qi/Th j] C's entropy: dSc >/= + d'Qo/Tc k] The rise in entropy in B, C and in the object on which the work is done, D, say, compensates for that lost from A. The second law holds for heat engines. l] However for B since it now couples energy into work and exhausts waste heat, does not necessarily undergo a rise in entropy having imported d'Qi. [The problem is to explain the origin of the heat engine -- or more generally, energy converter -- that does this, if it exhibits FSCI.] m] There is also a material difference between the sort of heat engine [an instance of the energy conversion device mentioned] that forms spontaneously as in a hurricane [directly driven by boundary conditions in a convective system on the planetary scale, i.e. an example of order], and the sort of energy conversion device found in living cells [the DNA-RNA-Ribosome-Enzyme system, which exhibits massive FSCI]. n] In short, the root problem is the ORIGIN of such a FSCI-based energy converter through causal mechanisms traceable only to chance conditions and undirected [non-purposive] natural forces. This problem yields a conundrum for chem evo scenarios, such that inference to agency as the probable cause of such FSCI -- on the analogy of the cases where we do directly know the causal story -- becomes the better explanation. 
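[Aside: the Clausius two-body bookkeeping in points b] through d] above can be illustrated with concrete numbers. The temperatures and d'Q here are my own illustrative choices, not from the text; the inequality signs are replaced by the reversible (equality) case for simplicity:]

```python
# Clausius' isolated-system example: heat d'Q flows from hot A to cold B.
d_Q = 100.0   # J transferred (assumed figure)
T_h = 400.0   # K, hot body A (assumed)
T_c = 300.0   # K, cold body B (assumed)

dS_a = -d_Q / T_h   # A loses entropy: "Eqn" A.1 applied to A
dS_b = +d_Q / T_c   # B gains more, because T_c < T_h
dS_total = dS_a + dS_b
print(dS_total)  # positive: total entropy rises, per "Eqn" A.2
```

[Here dS_b = +0.333 J/K outweighs dS_a = -0.25 J/K, so B's entropy rise from importing raw energy swamps A's fall — the point made in e] and f].]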
As TBO say, in bridging from a survey of the basic thermodynamics of living systems in Ch. 7, to the more focussed discussion in chs. 8 - 9: "While the maintenance of living systems is easily rationalized in terms of thermodynamics, the origin of such living systems is quite another matter. Though the earth is open to energy flow from the sun, the means of converting this energy into the necessary work to build up living systems from simple precursors remains at present unspecified (see equation 7-17). The "evolution" from biomonomers to fully functioning cells is the issue. Can one make the incredible jump in energy and organization from raw material and raw energy, apart from some means of directing the energy flow through the system? In Chapters 8 and 9 we will consider this question, limiting our discussion to two small but crucial steps in the proposed evolutionary scheme, namely, the formation of protein and DNA from their precursors. It is widely agreed that both protein and DNA are essential for living systems and indispensable components of every living cell today.11 Yet they are only produced by living cells. Both types of molecules are much more energy and information rich than the biomonomers from which they form. Can one reasonably predict their occurrence given the necessary biomonomers and an energy source? Has this been verified experimentally? These questions will be considered . . ." [Cf. summary in the peer-reviewed journal of the American Scientific Affiliation, "Thermodynamics and the Origin of Life," in Perspectives on Science and Christian Faith 40 (June 1988): 72-83; pardon the poor quality of the scan. NB: as the journal's online issues will show, this is not necessarily a "friendly audience."]
4] This is too weak a constraint to eliminate decreases in entropy in the subsystems and as entropy does not provide for a one-on-one measure of organized complexity the second law leaves open the question of whether an increase in order in a subsystem is due to the appearance of organized complexity or something else more banal. Already addressed, cf. supra.

5] The second law is a derivative of probabilities and statistical weighting, but like other derivative products there may be a loss of content in the derivative process. Kindly cf. my thought experiment at point 6 in the same appendix as already linked. One may show that there is a fall in W as we move from scattered, to clumped-at-random, to functionally configured states, and that the fall is so large in each case that the basic point is plain.

6] I hope I won't find myself on the receiving end of any spiritual bullying that in some cases is the abrasive ID match to the mockery and insult. I have invited discussion on the merits; why are you presuming or suggesting that I would set out to insult and attack unprovoked? [Onlookers: is such behaviour by TVR not simply a subtler form of ad hominem?] I again invite discussion on the merits. GEM of TKI
kairosfocus
March 15, 2008 at 08:53 AM PST
Timothy V Reeves: See also DLH comment #88 under Does Darwinian Evolution include the Origin of Life. I consider the Origin of Life another barrier (or a subset of the Second Law). Darwinian Evolution requires self-replicating life for "natural selection." For the same materialistic assumptions, the Origin of Life is an even greater challenge to Darwinian Evolution, since it cannot rely on "natural selection" to supposedly come up with the very high Complex Specified Information in even the simplest self-reproducing cell - which all has to be there and functioning for "evolution" to continue.
DLH
March 14, 2008 at 02:22 PM PST
Thanks for that, DLH. I'll do some investigations.
Timothy V Reeves
March 14, 2008 at 01:51 PM PST
Timothy V. Reeves: Please read again A Second Look at the Second Law, by Granville Sewell. Especially:
But getting the right number on 5 or 6 balls is not extremely improbable; in thermodynamics, "extremely improbable" events involve getting the "right number" on 100,000,000,000,000,000,000,000 or so balls! If every atom on Earth bought one ticket every second since the big bang (about 10^70 tickets) there is virtually no chance that any would ever win even a 100-ball lottery, much less this one. And since the second law derives its authority from logic alone, and thus cannot be overturned by future discoveries, Sir Arthur Eddington called it the "supreme" law of Nature [The Nature of the Physical World, Macmillan, 1929].
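[Aside: Sewell's lottery arithmetic can be sanity-checked in log space. The quote does not state the per-ball odds, so the one-in-ten figure below is an assumption for illustration (each ball must show the right one of ten digits):]

```python
from math import log10

tickets = 10**70   # the quote's figure: every atom on Earth, one ticket per second since the big bang
n_balls = 100
p_per_ball = 0.1   # assumed: right one of ten digits per ball (not stated in the quote)

log10_p_win = n_balls * log10(p_per_ball)             # one ticket wins with probability 10^-100
log10_expected_winners = log10(tickets) + log10_p_win
print(log10_expected_winners)  # -30: expect ~10^-30 winners, i.e. virtually none
```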
David Aikman observes:
Schroeder applied probability theory to the “Monkey Theorem” and calculated that the chance of getting Sonnet Eighteen by chance was 26 multiplied by itself 488 times (488 is the number of letters in the sonnet) or, in base 10, 10 to the 690th. . . . As Flew concluded, “if the theorem [the Monkey Theorem] won’t work for a single sonnet, then of course it’s simply absurd to suggest that the more elaborate feat of the origin of life could have been achieved by chance.”
DLH
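[Aside: Schroeder's sonnet figure quoted above checks out in log space. A minimal sketch; the 26-letter alphabet and 488-character length come from the quote itself:]

```python
from math import log10

letters = 26   # alphabet size used by Schroeder (no spaces or case, per the quote)
length = 488   # letters in Sonnet Eighteen, per the quote

digits = length * log10(letters)  # log10 of 26^488
print(digits)  # ~690.5, matching the quoted "10 to the 690th"
```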
March 14, 2008 at 07:47 AM PST
TVR at 29 and 31
All I am saying is that the second law, as it stands, is too blunt an instrument to eliminate evolution. I would be saying these things even if I became convinced of ID.
Your bluntness objection is worth exploring, at least for trying to explain systems. Increasing probability, i.e. entropy, destroys both physical "order" and design information (the computer hard drive, etc.). Arguments for evolution against the entropy objection point to physical "order," like crystallization. However, such local reductions in entropy (gains in physical "order") cannot explain the formation of CSI. Granville's arguments on "order" entering the system are particularly meaningful regarding CSI, as in the encyclopedia etc. So the challenge is to develop new formulations that clearly distinguish between CSI and physical "order." It's on my To Do list.
DLH
March 14, 2008 at 07:39 AM PST
Let me clarify my perspective, Kairosfocus. The ID community suggests that there are two major roadblocks on the conjectured highway of evolutionary development. Viz:

1. The Second Law (the ostensive subject of this comment thread)
2. Irreducible complexity

I'm certainly interested in the work of the ID community and wish to find out whether either or both of the above block evolution. Now you are obviously very proud of your work, Kairosfocus, but before I give it more than a general perusal I need answers to these questions:

1. Would you say that your work successfully demonstrates that one or both of the above points are roadblocks on the evolutionary road? Or perhaps you have found other roadblocks?

2. Where do you fit in the ID constellation? Obviously I need to continue to get to grips with the work of the star performers like Behe, Dembski and Granville, but how does the ID community - especially the contributors to this blog - react to your work? Can you point me to any comments on your work by the ID community? Perhaps they (e.g. Dembski and Granville) could even give me some recommendations in this thread. After all, 'Self praise is no recommendation'!

On a technical note, let me confess that I still don't see why the second law, as I understand it to be formulated, roadblocks evolution. If we take the log of the quantity W = PRODUCT OVER wi (where wi = microstates consistent with the macrostate of subsystem i), then we can show that the total entropy S of the system is equal to the sum of the entropies over the subsystems. The second law as it stands puts the constraint dS/dt > 0 on the total system. This is too weak a constraint to eliminate decreases in entropy in the subsystems, and as entropy does not provide a one-to-one measure of organized complexity, the second law leaves open the question of whether an increase in order in a subsystem is due to the appearance of organized complexity or something else more banal.
This is not to say, of course, that from other considerations (such as IC) organized complexity can be shown to be overwhelmingly improbable. The second law is a derivative of probabilities and statistical weighting, but like other derivative products there may be a loss of content in the derivation process, and alas for the ID community, the second law is not readily reversed to derive probabilities. I have yet to hack into your writings in earnest, but I am interested to see how clearly the concept of organized complexity comes out in your work.

Many critics of ID treat people like yourself as if you are only worthy of insult and mockery, as I am sure you have experienced. As I endeavor to approach the whole subject by practicing a discipline of studied detachment and fairness, I hope I won't find myself on the receiving end of any spiritual bullying that in some cases is the abrasive ID match to the mockery and insult.
Timothy V Reeves
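[Aside: TVR's claim that total entropy is the sum over subsystems is just the log-of-a-product rule applied to S = k ln W with W = PRODUCT OVER wi. A minimal numerical check — the weights below are arbitrary illustrative values:]

```python
from math import log

w1, w2 = 10**6, 10**9   # statistical weights of two subsystems (arbitrary)
k = 1.380649e-23        # Boltzmann constant, J/K

S_total = k * log(w1 * w2)            # entropy of the combined system
S_parts = k * log(w1) + k * log(w2)   # sum of subsystem entropies
print(abs(S_total - S_parts))  # ~0: log turns the product into a sum
```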
March 14, 2008 at 06:00 AM PST