
And once more: Life can arise naturally from chemistry!


Yet it isn’t happening, and we have no idea how it happened even once…

From science writer Michael Gross at Cell:

Rapid progress in several research fields relating to the origin of life bring us closer to the point where it may become feasible to recreate coherent and plausible models of early life in the laboratory. (paywall)

It’s a survey article, and it concludes:

“One of the main new aspects of origins research is the growing effort to connect chemistry to geology,” Jack Szostak notes. “Finding reasonable geological settings for the origin of life is a critical aspect of understanding the whole pathway. We’ve moved beyond thinking that life emerged from the oceans or at deep sea hydrothermal vents. New ideas for surface environments that could allow organic materials to accumulate over time, so that prebiotic chemistry could happen in very concentrated solutions, are a big advance.”

We can conclude from all of this that the emergence of life in a universe that provides a suitable set of conditions, like ours does, is an entirely natural process and does not require the postulate of a miracle birth – on our own planet and on many others. (Current Biology 26, December 19, 2016, R1247–R1249) More.

Okay. “Moved beyond” is a way of saying that hydrothermal vents are not the answer after all.

Coherent and plausible models in the lab are not the same thing as knowing what happened. And the more of them there are, the more necessary it would become to explain why life isn’t coming into existence all over the place all the time.

And at times, we are not even sure what we mean. Do some viruses meet the criterion of being alive?

A friend writes to ask: “Imagine how it would sound if a study on any other topic had the words ‘does not require the postulate of a miracle’ in the conclusion. Somehow they seem to think that it is perfectly appropriate and natural when discussing the origin of life.”

Aw, let’s be generous, it’s New Year’s Eve: When people really haven’t got very far in a discipline for the better part of two centuries, they tend to think in terms of zero or miracle. That’s just what they do.

Another friend writes to say that the thesis seems to be: Given enough time, anything can happen. If so, the proposition does not really depend on evidence. In 1954, Harvard biochemist George Wald wrote,

Time is in fact the hero of the plot. The time with which we have to deal is of the order of two billion years. What we regard as impossible on the basis of human experience is meaningless here. Given so much time, the “impossible” becomes possible, the possible probable, and the probable virtually certain. One has only to wait: time itself performs the miracles. (George Wald, “The Origin of Life,” Scientific American, August 1954, p. 48; quoted via TalkOrigins)
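Wald’s “given so much time” argument is easy to put in numbers. The sketch below (Python, with per-trial probabilities that are illustrative assumptions rather than figures from Wald or the article) only shows that piling up trials rescues an improbable event when the per-trial probability is not too small compared with the number of trials, which is precisely the point disputed in the exchange that follows.

```python
# Probability of at least one success in n independent trials at per-trial
# probability p:  P = 1 - (1 - p)^n, roughly n*p when n*p << 1.
# The values of p below are assumptions chosen for illustration only.
from math import expm1, log1p

def p_at_least_once(p: float, n: float) -> float:
    """1 - (1-p)^n, computed stably for very small p."""
    return -expm1(n * log1p(-p))

n_trials = 1e9 * 365 * 24 * 3600          # one trial per second for a billion years
for p in (1e-10, 1e-20, 1e-40, 1e-150):
    print(f"p = {p:.0e}: P(at least once) = {p_at_least_once(p, n_trials):.3e}")
# p = 1e-10 gives ~1.0 (time really does "perform the miracle"); p = 1e-150
# gives ~3e-134, i.e. still effectively never.  Everything turns on p.
```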

Really? Physicist Rob Sheldon has doubts:

In physics, we discuss reversible and irreversible reactions. If entropy (or information) is unchanged, then the system is reversible. If entropy increases (loss of information), then the reaction cannot be reversed. Outside of Darwin’s theory of evolution, there are no irreversible reactions in which entropy decreases (information is gained), because that would enable a perpetual motion machine.

Thus time is of no benefit for evolution, since a perpetual motion machine is no more possible if it runs slowly than if it runs quickly. And while errors may persist in biology because it may be too complicated to be sure of the entropy, the same cannot be said of chemistry. So the biggest boondoggle of all is attributing to precise and exact chemistry the magical anti-entropy properties of inexact and imprecise biology simply because one is a materialist reductionist who thinks life is a substance. I’m not picking on chemists or biologists, because I’ve even heard physicists say that evolution causes the multiverse to spawn us. Evidently this anti-entropy magic is just too powerful to keep it bottled up in biology alone, the world needs more perpetual motion salesmen, they spontaneously generate.

Oh well, happy New Year.

See also: Researchers: Bacteria fossils predate the origin of oxygen

Rob Sheldon: Why the sulfur-based life forms never amounted to much

Welcome to “RNA world,” the five-star hotel of origin-of-life theories

and

What we know and don’t know about the origin of life

Follow UD News at Twitter!

Comments
Meanwhile, there's no evidence that life can arise from chemistry. Mung
We might profit more from reading the Russians. I have the book by A.I. Khinchin, Mathematical Foundations of Information Theory. He even has one translated by George Gamow, Mathematical Foundations of Statistical Mechanics. I should check that out. Mung
Mung, those are not exact words, I paraphrase. KF kairosfocus
...so we see why if we consider the observed cosmos as an isolated system — something Sears and Salinger pointed out as philosophically loaded in their textbook, the one from which I first seriously studied these matters...
I like this quote. Mung
Headlined, with diagrams: https://uncommondescent.com/intelligent-design/of-s-t-r-i-ng-s-nanobots-informational-statistical-thermodynamics-and-evolution/ kairosfocus
PPS: I found an elementary introduction to statistical entropy very helpful, from the Russian authors Yavorski and Pinsky, in their Physics, vol I [1974]: as we consider a simple model of diffusion, let us think of ten white and ten black balls in two rows in a container. There is of course but one way in which there are ten whites in the top row; the balls of any one colour being for our purposes identical. But on shuffling, there are 63,504 ways to arrange five each of black and white balls in the two rows, and 6-4 distributions may occur in two ways, each with 44,100 alternatives. So, if we for the moment see the set of balls as circulating among the various different possible arrangements at random, and spending about the same time in each possible state on average, the time the system spends in any given state will be proportionate to the relative number of ways that state may be achieved. Immediately, we see that the system will gravitate towards the cluster of more evenly distributed states. In short, we have just seen that there is a natural trend of change at random, towards the more thermodynamically probable macrostates, i.e the ones with higher statistical weights. So "[b]y comparing the [thermodynamic] probabilities of two states of a thermodynamic system, we can establish at once the direction of the process that is [spontaneously] feasible in the given system. It will correspond to a transition from a less probable to a more probable state." [p. 284.] This is in effect the statistical form of the 2nd law of thermodynamics. Thus, too, the behaviour of the Clausius isolated system of A and B with d'Q of heat moving A --> B by reason of B's lower temperature is readily understood: First, -d'Q/T_a is of smaller magnitude than + d'Q/T_b, as T_b is less than T_a and both are positive values; so we see why if we consider the observed cosmos as an isolated system -- something Sears and Salinger pointed out as philosophically loaded in their textbook, the one from which I first seriously studied these matters -- then a transfer or energy by reason of temperature difference [i.e. heat] will net increase entropy. Second, we bridge to the micro view if we see how importing d'Q of random molecular energy so far increases the number of ways energy can be distributed at micro-scale in B, that the resulting rise in B's entropy swamps the fall in A's entropy. That is, we have just lost a lot more information about B's micro-state than we gained about A's. Moreover, given that FSCO/I-rich micro-arrangements are relatively rare in the set of possible arrangements, we can also see why it is hard to account for the origin of such states by spontaneous processes in the scope of the observable universe. (Of course, since it is as a rule very inconvenient to work in terms of statistical weights of macrostates [i.e W], we instead move to entropy, through s = k ln W or Gibbs' more complex formulation. Part of how this is done can be seen by imagining a system in which there are W ways accessible, and imagining a partition into parts 1 and 2. W = W1*W2, as for each arrangement in 1 all accessible arrangements in 2 are possible and vice versa, but it is far more convenient to have an additive measure, i.e we need to go to logs. The constant of proportionality, k, is the famous Boltzmann constant and is in effect the universal gas constant, R, on a per molecule basis, i.e we divide R by the Avogadro Number, NA, to get: k = R/NA. 
The two approaches to entropy, by Clausius, and Boltzmann, of course, correspond. In real-world systems of any significant scale, the relative statistical weights are usually so disproportionate, that the classical observation that entropy naturally tends to increase, is readily apparent.) Third, the diffusion model is a LINEAR space, a string structure. This allows us to look at strings thermodynamically and statistically. Without losing force on the basic issue, let us consider the simplest case, equiprobability of position, with an alphabet of two possibilities [B vs. W balls]. Here, we see that special arrangements that may reflect strong order or organisation are vastly rarer in the set of possibilities than those that are near the peak of this distribution. For 1,000 balls, half B and half W, the peak is obviously going to be with the balls spread out in such a way that the next ball has 50-50 odds of being B or W, maximum uncertainty. Now, let us follow L K Nash and Mandl, and go to a string of 1,000 coins or a string of paramagnetic elements in a weak field. (The latter demonstrates that the coin-string model is physically relevant.) We now have binary elements, and a binomial distribution with a field of binary digits, so we know there are 1.07 *10^301 possibilities from 000 . . . 0 to 111 . . . 1 inclusive. But if we cluster possibilities by proportions that are H and T, we see that there is a sharp peak near 500:500, and that by contrast there are much fewer possibilities as we approach 1,000:0 or 0:1,000. At the extremes, as the coins are identical, there is but one way each. Likewise for alternating H, T -- a special arrangement, there are just two ways, H first, T first. We now see how order accords with compressibility of description -- more or less, algorithmic compressibility. To pick up one of the "typical" values near the peak, we essentially need to cite the string, while for the extremes, we need only give a brief description. this was Orgel's point on info capacity being correlated with length of description string. Now, as we know Trevors and Abel in 2004 pointed out that code-bearing strings [or aperiodic functional ones otherwise] will resist compressibility but will be more compressible than the utterly flat random cases. This defines an island of function. And we see that this is because any code or functionally specific string will naturally have in it some redundancy, there will not be a 50-50 even distribution in all cases. There is a statistically dominant cluster, utterly overwhelmingly dominant, near 500-500 in no particular pattern or organised functional message-bearing framework. We can now come back to the entropy view, the peak is the high entropy, low information case. That is, if we imagine some nano-bots that can rearrange coin patterns, if they act at random, they will utterly likely produce the near-500-500 no particular order result. But now, if we instruct them with a short algorithm, they can construct all H or all T, or we can give them instructions to do HT-HT . . . etc. Or, we can feed in ASCII code or some other description language based information. It is conceivable that the robots could generate such codes by chance, but the degree of isolation in the space of possibilities is such that effectively these are unobservable on the scale of the observed cosmos. As, a blind random search of the space of possibilities will be maximally unlikely to hit on the highly informational patterns. 
It does not matter if we were to boost the robot energy levels and speed them up to a maximum reasonable speed, that of molecular interactions, and it does not matter if in effect the 10^80 atoms of the observed cosmos were given coin strings and robots, so the strings could be flipped and read 10^12 - 10^14 times per s for 10^17 s. Which is the sort of gamut we have available. We can confidently infer that if we see a string of 1,000 coins in a meaningful ordered or organised pattern, they were put that way by intelligently directed work, based on information. By direct import of the statistical thermodynamic reasoning we have been using. That is, we here see the basis for the confident and reliable inference to design on seeing FSCO/I. Going further, we can see that codes include descriptions of functional organisation as per AutoCAD etc, and that such can specify any 3-d organisation of components that is functional. Where also, we can readily follow the instructions using a von Neumann universal constructor facility [make it to be self replicating and done, too] and test for observable function. Vary the instructions at random, and we soon enough see where the limits of an island of function are as function ceases. Alternatively, we can start with a random string, and then allow our nanobots to assemble. If something works, we preserve and allow further incremental, random change. That is, we have -- as a thought exercise -- an evolutionary informatics model. And, we have seen how discussion on strings is without loss of generality, as strings can describe anything else of relevance and such descriptions can be actualised as 3-d entities through a universal constructor. Which can be self-replicating, thus the test extends to evolution. (And yes, this also points to the issue of the informational description of the universal constructor and self replication facility as the first threshold to be passed. Nor is this just a mind-game, the living cell is exactly this sort of thing, though perhaps not yet a full bore universal constructor. [Give us a couple of hundred years to figure that out and we will likely have nanobot swarms that will be just that!]) The inference at this point is obvious: by the utter dominance of non-functional configurations, 500 - 1,000 bits of information is a generous estimate of the upper limit for blind mechanisms to find functional forms. This then extends directly into looking at the genome and to the string length of proteins as an index of find-ability, thence the evaluation of plausibility of origin of life and body plan level macro-evo models. Origin of life by blind chance and/or mechanical necessity is utterly implausible. Minimal genomes are credibly 100 - 1,000 k bases, corresponding to about 100 times the size of the upper threshold. Origin of major body plans, similarly, reasonably requires some 10 - 100+ mn new bases. We are now 10 - 100 thousand times the threshold. Inference: the FSCO/I in first cell based life is there by design. Likewise that in novel body plans up to our own. And, such is rooted in the informational context of such life. kairosfocus
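For readers who want to check the statistical weights quoted in the comment above, here is a short Python sketch that reproduces the arithmetic (the 63,504 and 44,100 counts from the two-row ball model, and the roughly 1.07*10^301 configurations of a 1,000-coin string). It only verifies the counting; the design inference drawn from it is the commenter's argument, not something the code establishes.

```python
# Exact counts for the statistical-weight examples discussed above.
from math import comb

def ways_with_k_white_on_top(k: int, row_len: int = 10) -> int:
    """Arrangements of 10 white + 10 black balls in two rows of row_len
    positions with exactly k white balls in the top row."""
    return comb(row_len, k) * comb(row_len, row_len - k)

print(ways_with_k_white_on_top(5))   # 63504 -- the 5-5 macrostate
print(ways_with_k_white_on_top(6))   # 44100 -- the 6-4 macrostate (4-6 is the other 44100)

# The 1,000-coin string: 2^1000 microstates, C(1000, h) of them with h heads.
total = 2 ** 1000
print(f"total configurations: {total:.3e}")                    # ~1.071e+301
peak = comb(1000, 500)
print(f"500H:500T weight: {peak:.3e} ({peak / total:.1%} of the space)")
print(f"all-heads: {comb(1000, 1000)} way; alternating HTHT...: 2 ways")
near_peak = sum(comb(1000, h) for h in range(450, 551))
print(f"450-550 heads covers {near_peak / total:.4f} of all configurations")
```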
PS: Brillouin, again from my note:
How is it possible to formulate a scientific theory of information? The first requirement is to start from a precise definition. . . . . We consider a problem involving a certain number of possible answers, if we have no special information on the actual situation. When we happen to be in possession of some information on the problem, the number of possible answers is reduced, and complete information may even leave us with only one possible answer. Information is a function of the ratio of the number of possible answers before and after, and we choose a logarithmic law in order to insure additivity of the information contained in independent situations . . . . Physics enters the picture when we discover a remarkable likeness between information and entropy. This similarity was noticed long ago by L. Szilard, in an old paper of 1929, which was the forerunner of the present theory. In this paper, Szilard was really pioneering in the unknown territory which we are now exploring in all directions. He investigated the problem of Maxwell's demon, and this is one of the important subjects discussed in this book. The connection between information and entropy was rediscovered by C. Shannon in a different class of problems, and we devote many chapters to this comparison. We prove that information must be considered as a negative term in the entropy of a system; in short, information is negentropy. The entropy of a physical system has often been described as a measure of randomness in the structure of the system. We can now state this result in a slightly different way: Every physical system is incompletely defined. We only know the values of some macroscopic variables, and we are unable to specify the exact positions and velocities of all the molecules contained in a system. We have only scanty, partial information on the system, and most of the information on the detailed structure is missing. Entropy measures the lack of information; it gives us the total amount of missing information on the ultramicroscopic structure of the system. This point of view is defined as the negentropy principle of information, and it leads directly to a generalization of the second principle of thermodynamics, since entropy and information must, be discussed together and cannot be treated separately. This negentropy principle of information will be justified by a variety of examples ranging from theoretical physics to everyday life. The essential point is to show that any observation or experiment made on a physical system automatically results in an increase of the entropy of the laboratory. It is then possible to compare the loss of negentropy (increase of entropy) with the amount of information obtained. The efficiency of an experiment can be defined as the ratio of information obtained to the associated increase in entropy. This efficiency is always smaller than unity, according to the generalized Carnot principle. Examples show that the efficiency can be nearly unity in some special examples, but may also be extremely low in other cases. This line of discussion is very useful in a comparison of fundamental experiments used in science, more particularly in physics. It leads to a new investigation of the efficiency of different methods of observation, as well as their accuracy and reliability . . . . [ Science and Information Theory, Second Edition; 1962. From an online excerpt of the Dover Reprint edition . . . ]
kairosfocus
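Brillouin's "log of the ratio of possible answers before and after" definition can be shown in a couple of lines. The numbers below are an invented illustration, not an example from the book; the point is only the logarithmic, additive bookkeeping he describes.

```python
from math import log2

def info_gained_bits(answers_before: int, answers_after: int) -> float:
    """Brillouin-style information gain: log2 of the reduction ratio."""
    return log2(answers_before / answers_after)

# Narrowing 2^20 equally likely possibilities down to 16 possibilities:
print(info_gained_bits(2**20, 16))                                   # 16.0 bits
# The log law makes independent refinements additive:
print(info_gained_bits(2**20, 2**10) + info_gained_bits(2**10, 16))  # also 16.0 bits
```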
Mung, yup. The pivotal issue is to make the conceptual leap to see how information enters the picture: a puzzle posed when the Shannon average info per symbol metric took on a suspiciously familiar shape, c. 1948 . . . and there was a discussion that ended up with yup we call it "[information] entropy." A decade later, Jaynes was pointing a way, and so was Brillouin with his "negentropy" view; notice the negative of entropy value that just "pops out" in my derivation snippet above. And Harry S Robertson is truly awesome indeed. Believe it or not, I did not know what was in it (the ID debates were not in my ken at that time), it just looked like a good Thermo-D book when I bought it. KF kairosfocus
Gordon Davisson:
Another reason this sort of information doesn’t have much to do with what most people think of as “information” is that it’s information about the microscopic state of the system, and that’s not something most people are concerned with. They don’t really care exactly where each nitrogen and oxygen molecule is in the air around them, but that’s the sort of information we’re talking about.
It's the same sort of information that people are accustomed to in everyday life, such as how many yes/no questions it takes, on average, in a game of twenty questions. Mung
Gordon Davisson:
From the ID point of view, there’s an even bigger problem: since it’s related to thermodynamics, and thermodynamics is mostly about heat and energy… this sort of information is also mostly about heat and energy.
The correct way to look at it, as explained by Ben-Naim, is that entropy is a special case of the Shannon measure. So not all information measures are necessarily thermodynamic in their application. The "problem" then dissipates. The Shannon measure applies to any probability distribution, whereas thermodynamic entropy does not, as it is only applicable for certain specific distributions. Mung
kairosfocus:
Thermodynamic entropy turns out to have a feasible interpretation as missing info to specify micro-state [particular cell in phase space] given macro-observable state.
What's more, this interpretation of entropy has nothing to do with order or disorder and it is easily shown that the order/disorder interpretation is in fact misleading. Notice Gordon's use of "generally" in his post. Mung
As to the title of the OP... News, I'm not even certain that it has been shown that chemistry can arise naturally from physics! Mung
PS: Durston et al, 2007, again using my note:
Abel and Trevors have delineated three qualitative aspects of linear digital sequence complexity [2,3], Random Sequence Complexity (RSC), Ordered Sequence Complexity (OSC) and Functional Sequence Complexity (FSC). RSC corresponds to stochastic ensembles with minimal physicochemical bias and little or no tendency toward functional free-energy binding. OSC is usually patterned either by the natural regularities described by physical laws or by statistically weighted means. For example, a physico-chemical self-ordering tendency creates redundant patterns such as highly-patterned polysaccharides and the polyadenosines adsorbed onto montmorillonite [4]. Repeating motifs, with or without biofunction, result in observed OSC in nucleic acid sequences. The redundancy in OSC can, in principle, be compressed by an algorithm shorter than the sequence itself. As Abel and Trevors have pointed out, neither RSC nor OSC, or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life [5]. FSC includes the dimension of functionality [2,3]. Szostak [6] argued that neither Shannon's original measure of uncertainty [7] nor the measure of algorithmic complexity [8] are sufficient. Shannon's classical information theory does not consider the meaning, or function, of a message. Algorithmic complexity fails to account for the observation that 'different molecular structures may be functionally equivalent'. For this reason, Szostak suggested that a new measure of information–functional information–is required [6] . . . . Shannon uncertainty, however, can be extended to measure the joint variable (X, F), where X represents the variability of data, and F functionality. This explicitly incorporates empirical knowledge of metabolic function into the measure that is usually important for evaluating sequence complexity. This measure of both the observed data and a conceptual variable of function jointly can be called Functional Uncertainty (Hf) [17], and is defined by the equation: H(Xf(t)) = -[SUM]P(Xf(t)) logP(Xf(t)) . . . (1) where Xf denotes the conditional variable of the given sequence data (X) on the described biological function f which is an outcome of the variable (F). For example, a set of 2,442 aligned sequences of proteins belonging to the ubiquitin protein family (used in the experiment later) can be assumed to satisfy the same specified function f, where f might represent the known 3-D structure of the ubiquitin protein family, or some other function common to ubiquitin. The entire set of aligned sequences that satisfies that function, therefore, constitutes the outcomes of Xf. Here, functionality relates to the whole protein family which can be inputted from a database . . . . In our approach, we leave the specific defined meaning of functionality as an input to the application, in reference to the whole sequence family. It may represent a particular domain, or the whole protein structure, or any specified function with respect to the cell. Mathematically, it is defined precisely as an outcome of a discrete-valued variable, denoted as F={f}. The set of outcomes can be thought of as specified biological states. They are presumed non-overlapping, but can be extended to be fuzzy elements . . . Biological function is mostly, though not entirely determined by the organism's genetic instructions [24-26]. 
The function could theoretically arise stochastically through mutational changes coupled with selection pressure, or through human experimenter involvement [13-15] . . . . The ground state g (an outcome of F) of a system is the state of presumed highest uncertainty (not necessarily equally probable) permitted by the constraints of the physical system, when no specified biological function is required or present. Certain physical systems may constrain the number of options in the ground state so that not all possible sequences are equally probable [27]. An example of a highly constrained ground state resulting in a highly ordered sequence occurs when the phosphorimidazolide of adenosine is added daily to a decameric primer bound to montmorillonite clay, producing a perfectly ordered, 50-mer sequence of polyadenosine [3]. In this case, the ground state permits only one single possible sequence . . . . The null state, a possible outcome of F denoted as ø, is defined here as a special case of the ground state of highest uncertainty when the physical system imposes no constraints at all, resulting in the equi-probability of all possible sequences or options. Such sequencing has been called "dynamically inert, dynamically decoupled, or dynamically incoherent" [28,29]. For example, the ground state of a 300 amino acid protein family can be represented by a completely random 300 amino acid sequence where functional constraints have been loosened such that any of the 20 amino acids will suffice at any of the 300 sites. From Eqn. (1) the functional uncertainty of the null state is represented as H(Xø(ti)) = -[SUM] P(Xø(ti)) log P(Xø(ti)) . . . (3) where (Xø(ti)) is the conditional variable for all possible equiprobable sequences. Consider that the number of all possible sequences is denoted by W. Letting the length of each sequence be denoted by N and the number of possible options at each site in the sequence be denoted by m, W = m^N. For example, for a protein of length N = 257 and assuming that the number of possible options at each site is m = 20, W = 20^257. Since, for the null state, we are requiring that there are no constraints and all possible sequences are equally probable, P(Xø(ti)) = 1/W and H(Xø(ti)) = -[SUM] (1/W) log (1/W) = log W . . . (4) The change in functional uncertainty from the null state is, therefore, ΔH(Xø(ti), Xf(tj)) = log (W) - H(Xf(ti)) . . . (5) . . . . The measure of Functional Sequence Complexity, denoted as ζ (zeta), is defined as the change in functional uncertainty from the ground state H(Xg(ti)) to the functional state H(Xf(ti)), or ζ = ΔH(Xg(ti), Xf(tj)) . . . (6) The resulting unit of measure is defined on the joint data and functionality variable, which we call Fits (or Functional bits). The unit Fit thus defined is related to the intuitive concept of functional information, including genetic instruction and, thus, provides an important distinction between functional information and Shannon information [6,32]. Eqn. (6) describes a measure to calculate the functional information of the whole molecule, that is, with respect to the functionality of the protein considered. The functionality of the protein can be known and is consistent with the whole protein family, given as inputs from the database. However, the functionality of a sub-sequence or particular sites of a molecule can be substantially different [12]. The functionality of a sub-molecule, though clearly extremely important, has to be identified and discovered . . . .
To avoid the complication of considering functionality at the sub-molecular level, we crudely assume that each site in a molecule, when calculated to have a high measure of FSC, correlates with the functionality of the whole molecule. The measure of FSC of the whole molecule is then the total sum of the measured FSC for each site in the aligned sequences. Considering that there are usually only 20 different amino acids possible per site for proteins, Eqn. (6) can be used to calculate a maximum Fit value/protein amino acid site of 4.32 Fits/site [NB: Log2 (20) = 4.32]. We use the formula log (20) - H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability, in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space. A high Fit value for individual sites within a protein indicates sites that require a high degree of functional information. High Fit values may also point to the key structural or binding sites within the overall 3-D structure.
kairosfocus
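As a concrete companion to the Durston et al. excerpt, here is a minimal Python sketch of the per-site calculation it describes: functional information at a site is log2(20) minus the Shannon uncertainty of that aligned column, and the Fit value of the family is the sum over sites. The three-sequence "alignment" is invented purely for illustration (real calculations use large families and the paper's ground-state refinements), so the printed number means nothing biologically.

```python
from collections import Counter
from math import log2

def site_fits(column: str, alphabet_size: int = 20) -> float:
    """log2(alphabet) minus the Shannon uncertainty H of one aligned column."""
    n = len(column)
    counts = Counter(column)
    h = -sum((c / n) * log2(c / n) for c in counts.values())
    return log2(alphabet_size) - h

def total_fits(alignment: list[str]) -> float:
    """Sum the per-site functional information over all aligned sites."""
    return sum(site_fits("".join(col)) for col in zip(*alignment))

toy_alignment = ["MKTAY", "MKSAY", "MRTAY"]     # hypothetical aligned sequences
print(f"{total_fits(toy_alignment):.2f} Fits")  # ~19.8 for this toy case
# A fully conserved site contributes log2(20) ~ 4.32 Fits; a site where any
# residue is equally tolerated contributes ~0 Fits.
```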
GD, Consider polymers as strings, e.g. AA and D/RNA. These strings exist in a config space of possibilities which has in it islands of function. The function is observable, as is the non function, per the cell. Thus, we can speak to issues of observability, configuration, function, search spaces etc, and we can go on to assess what Durston et al did in their 2007 paper, look at null, ground, functional states and information content, including the degree of freedom warranted per empirical observation of functional proteins. In short, Jaynes et al are not using an unusual view, they are seeing the connexion between the entropy that cropped up naturally in information theory and what lies behind the entropy developed earlier for thermodynamics. Thermodynamic entropy turns out to have a feasible interpretation as missing info to specify micro-state [particular cell in phase space] given macro-observable state. These then allow us to look at functional states in AA and D/RNA strings at first level. Note the following extended clip from Durston et al 2007. Going on, we can note that strings extend to much broader cases, as functional organisation is reducible to strings in a description language, as with AutoCAD etc. So, discussion on strings can cover functional organisation in general, and leads to a configuration space, island of function analysis. (Roughly, a config space is a phase space where we are not interested in momentum issues.) The result is much as you would expect, non-functional states form a vast sea, and for things relevant to FSCO/I, we have deeply isolated islands of function. Thus, we see the sol system and/or observed cosmos scale search challenge and why 500 - 1,000 bits is a conservative threshold. At upper end, ponder 10^80 atoms working as observers and going at 10^12 - 10^14/s observations for 10^17 s, vs a space from 0000 . . . 0 to 1111 . . . 1 with 1.07*10^301 cells to be blindly searched. Where, suggested "golden" searches face the challenge that a search is a subset, so for a set of n members the set of possible searches comes from the power set, of magnitude 2^n. We see n of order 10^301 already. So, the issue is forming the concepts and applying them properly. Sure, cooling down a thermodynamic system drastically reduces missing info, e.g. freezing locks things down to an orderly pattern. That does not undercut the force of the issue we are looking at, especially when one ponders a Darwin's pond or the like as a pre-life context. KF kairosfocus
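The "cosmos-scale search" arithmetic in the comment above is simple enough to spell out. The sketch below just multiplies the quoted figures (10^80 atoms, up to 10^14 observations per second, 10^17 seconds) and compares the product with 2^1000 configurations; whether that comparison supports the 500 - 1,000 bit threshold is the argument being made above, not something the code decides.

```python
# Generous upper bound on blind "observations" versus a 1,000-bit space,
# using the figures quoted in the comment above.
atoms = 10 ** 80          # atoms in the observed cosmos (quoted estimate)
rate = 10 ** 14           # observations per second per atom (upper figure)
seconds = 10 ** 17        # roughly the age of the universe in seconds

observations = atoms * rate * seconds      # 10^111
space = 2 ** 1000                          # ~1.071e301 configurations

print(f"observations available: ~1e{len(str(observations)) - 1}")
print(f"fraction of the space examined: {observations / space:.1e}")
# prints ~9.3e-191: even this deliberately generous bound samples only a
# vanishingly small fraction of the 1,000-bit configuration space.
```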
Kairosfocus, I basically agree with the Jaynes view of the link between information theory and thermodynamics, and I've been aware of it for quite a long time. But as I said in my first comment here, it "requires you to use a very unusual definition of 'information', and one that's pretty irrelevant to the usual information-based arguments for ID". Specifically, it identifies thermodynamic entropy with the amount of information missing from a macroscopic description of a system's state vs the complete information in a microscopically-detailed description of the system's state. IMO this is a perfectly legitimate way to think about thermodynamic entropy, but it doesn't have much to do with how most people think about information, nor (as far as I can see) have much to do with the sorts of information that ID concerns itself with. One of the big disconnects between this sort of information and what ID is concerned with is precisely the order vs. organization distinction -- entropy (both thermodynamic and Shannon) is about order and disorder, not organization. Maximum entropy generally corresponds to the data (information) or system (thermo) being maximally disordered and random. Minimum entropy generally corresponds to it being maximally ordered. Organized systems generally have intermediate entropy. Here's something I posted a while ago on the subject:
To clarify the difference between organization, order, and disorder, let me draw on David Abel and Jack Trevors' paper, "Three subsets of sequence complexity and their relevance to biopolymeric information" (published in Theoretical Biology and Medical Modelling 2005, 2:29). Actually, I'll mostly draw on their Figure 4, which tries to diagram the relationships between a number of different types of (genetic) sequence complexity — random sequence complexity (RSC — roughly corresponding to disorder), ordered (OSC), and functional (FSC — roughly corresponding to organization). What I'm interested in here is the ordered-vs-random axis (horizontal on the graph), and functional axis (Y2/vertical on the graph). I'll ignore the algorithmic compressibility axis (Y1 on the graph). Please take a look at the graph before continuing… I'll wait… Back? Good, now, the point I want to make is that the connection between thermal and information entropy only relates to the horizontal (ordered-vs-random) axis, not the vertical (functional, or organizational) axis. The point of minimum entropy is at the left-side bottom of the graph, corresponding to pure order. The point of maximum entropy is at the right-side bottom of the graph, corresponding to pure randomness. The functional/ordered region is in between those, and will have intermediate entropy.
See my full earlier comment for examples and more details of my view on this. Another reason this sort of information doesn't have much to do with what most people think of as "information" is that it's information about the microscopic state of the system, and that's not something most people are concerned with. They don't really care exactly where each nitrogen and oxygen molecule is in the air around them, but that's the sort of information we're talking about. From the ID point of view, there's an even bigger problem: since it's related to thermodynamics, and thermodynamics is mostly about heat and energy... this sort of information is also mostly about heat and energy. Above, I quoted the example of cooling 1 cc of water off by 1° C, which decreases the water's thermodynamic entropy by 3.33e-3 cal/K, and (using this definition of information) corresponds to an information gain of 1.46e21 bits. As far as I can see, a definition of information where a huge quantity of information can be produced simply by cooling something off should be anathema to an ID argument. You can argue for this definition if you want, but I don't see how you can use it to argue for ID; it looks, if it's relevant at all, like a huge argument against ID. (Note: I'm not claiming it actually is an argument against ID; I think it's basically irrelevant. But if you think it's relevant, you have to explain why it doesn't undermine your case.) Gordon Davisson
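Gordon Davisson's unit conversion is easy to reproduce. The sketch below redoes the 1 cc of water example with the same constants he quotes (entropy change in cal/K divided by k_B * ln 2 to get bits); it simply checks the arithmetic behind the 1.46e21-bit figure, which is the number his objection turns on.

```python
from math import log

k_B = 3.298e-24                    # Boltzmann's constant in cal/K (as quoted)
bit_size = k_B * log(2)            # ~2.286e-24 cal/K of entropy per bit

delta_Q = -1.0                     # ~1 calorie of heat removed from 1 cc of water
T = 300.15                         # starting temperature in kelvin
delta_S = delta_Q / T              # ~ -3.33e-3 cal/K change in the water's entropy

print(f"entropy change: {delta_S:.3e} cal/K")
print(f"'information gained' on the negentropy reading: {-delta_S / bit_size:.2e} bits")
# ~1.46e21 bits, matching the figure in the comment: simply cooling a
# thimbleful of water "gains" a colossal amount of this kind of information,
# which is exactly the disconnect being pointed out.
```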
If you have an isolated system consisting of a single particle in a volume separated into equal sub-volumes by a barrier A|B and you remove the barrier, which way does the information flow? Frankly, imho, that's just silly talk. Our uncertainty as to the location of the particle increased. Did information flow out of our brain into the surrounding universe? There was no flow of information, there was only an increase in uncertainty. Shannon's measure is probabilistic. So is thermodynamics. Flow of information has nothing to do with it. Mung
Gordon Davisson:
And as I said in my first comment on this post, entropy decreases are common and unremarkable.
Ben-Naim shreds popular science writing on entropy. He finds their comments remarkable. :) Information, Entropy, Life and the Universe: What We Know and What We Do Not Know The Briefest History of Time: The History of Histories of Time and the Misconstrued Association between Entropy and Time Mung
kairosfocus: Summarising Harry Robertson’s Statistical Thermophysics (Prentice-Hall International, 1993) Awesome book. Mung
PS: I should add, that in discussing what I have descriptively summarised as functionally specific complex organisation and/or associated information [= FSCO/I for handy short], I recognised long since that we can identify description languages that allow us to specify the parts, orientation, coupling and overall arrangement, much as AutoCAD etc do. Thus, in effect we can measure the information content of an organised system that depends on specific functional organisation of parts, through such a reduction to a structured string of y/n questions. Redundancies in arrangements can then be addressed through don't care bits. This means that discussion on s-t-r-i-n-g-s is WLOG. Orgel and Wicken between them long since recognised this, cf Orgel in 1973 and Wicken in 1979. Wicken, 1979:
‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]
Orgel, 1973:
. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . . [HT, Mung, fr. p. 190 & 196 (and now also HT Amazon second hand books):] These vague idea can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [--> this is of course equivalent to the string of yes/no questions required to specify the relevant "wiring diagram" for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002, also cf here, here and here (with here on self-moved agents as designing causes).] One can see intuitively that many instructions are needed to specify a complex structure. [--> so if the q's to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions. [--> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes. [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196. Of course, that immediately highlights OOL, where the required self-replicating entity is part of what has to be explained (cf. Paley here), a notorious conundrum for advocates of evolutionary materialism; one, that has led to mutual ruin documented by Shapiro and Orgel between metabolism first and genes first schools of thought, cf here. Behe would go on to point out that irreducibly complex structures are not credibly formed by incremental evolutionary processes and Menuge et al would bring up serious issues for the suggested exaptation alternative, cf. his challenges C1 - 5 in the just linked. Finally, Dembski highlights that CSI comes in deeply isolated islands T in much larger configuration spaces W, for biological systems functional islands. That puts up serious questions for origin of dozens of body plans reasonably requiring some 10 - 100+ mn bases of fresh genetic information to account for cell types, tissues, organs and multiple coherently integrated systems. Wicken's remarks a few years later as already were cited now take on fuller force in light of the further points from Orgel at pp. 190 and 196 . . . ]
kairosfocus
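The order / randomness / function distinction drawn by Orgel, Wicken, and Trevors/Abel can be crudely illustrated with a general-purpose compressor standing in for description length. The sketch below is only a rough proxy (zlib is not algorithmic information, and the three sample strings are invented), but it typically reproduces the qualitative ordering the passages above describe.

```python
import random
import zlib

random.seed(0)
n = 2000
ordered = ("HT" * n)[:n]                                        # simple repeating order
disordered = "".join(random.choice("HT") for _ in range(n))     # near-maximal randomness
vocab = ["the", "cell", "reads", "codes", "for", "a", "protein", "fold"]
textlike = " ".join(random.choice(vocab) for _ in range(500))[:n]   # redundant but aperiodic

for name, s in [("ordered", ordered), ("text-like", textlike), ("random", disordered)]:
    data = s.encode()
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name:>9}: {ratio:.3f} compressed bytes per original byte")
# Typical output: ordered << text-like << random, i.e. order has a short
# description, randomness is nearly incompressible, and functional-style
# redundancy sits in between -- the compressibility point made above.
```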
GD, While I have little interest in who said what when (other than, that the negentropy view did arise from the discussions on highly suggestive similarities of mathematics from the outset), I have already pointed out above, from Wiki (via my always linked briefing note) -- as an admission against known interest -- on information-entropy links beyond mere similarity of equations (as in there is an informational school of thought on entropy, cf Harry S Robertson's Statistical Thermophysics for a discussion), a school that seems to at minimum have a serious point:
. . . we may average the information per symbol in the communication system thusly (giving in terms of -H to make the additive relationships clearer): - H = p1 log p1 + p2 log p2 + . . . + pn log pn or, H = - SUM [pi log pi] . . . Eqn 5 H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: "it is often referred to as the entropy of the source." [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information, cf also here):
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) -- excerpting desperately and adding emphases and explanatory comments, we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the very large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.) For, as he astutely observes on pp. vii - viii:
. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .
And, in more detail (pp. 3 - 6, 7, 36, cf Appendix 1 below for a more detailed development of thermodynamics issues and their tie-in with the inference to design; also see recent ArXiv papers by Duncan and Semura here and here):
. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . . [deriving informational entropy, cf. discussions here, here, here, here and here; also Sarfati's discussion of debates and the issue of open systems here . . . ] H({pi}) = - C [SUM over i] pi*ln pi, [. . . "my" Eqn 6] [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp - beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . . [H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . . Jayne's [summary rebuttal to a typical objection] is ". . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly 'objective' quantity . . . it is a function of [those variables] and does not depend on anybody's personality. There is no reason why it cannot be measured in the laboratory." . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.]
As is discussed briefly in Appendix 1, Thaxton, Bradley and Olsen [TBO], following Brillouin et al, in the 1984 foundational work for the modern Design Theory, The Mystery of Life's Origins [TMLO], exploit this information-entropy link, through the idea of moving from a random to a known microscopic configuration in the creation of the bio-functional polymers of life, and then -- again following Brillouin -- identify a quantitative information metric for the information of polymer molecules. For, in moving from a random to a functional molecule, we have in effect an objective, observable increment in information about the molecule. This leads to energy constraints, thence to a calculable concentration of such molecules in suggested, generously "plausible" primordial "soups." In effect, so unfavourable is the resulting thermodynamic balance, that the concentrations of the individual functional molecules in such a prebiotic soup are arguably so small as to be negligibly different from zero on a planet-wide scale. By many orders of magnitude, we don't get to even one molecule each of the required polymers per planet, much less bringing them together in the required proximity for them to work together as the molecular machinery of life. The linked chapter gives the details. More modern analyses [e.g. Trevors and Abel, here and here], however, tend to speak directly in terms of information and probabilities rather than the more arcane world of classical and statistical thermodynamics, so let us now return to that focus . . .
I trust this provides a trigger for starting a re-think on the links between entropy and information i/l/o fairly recent work and thought. KF kairosfocus
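To make the Robertson/Jaynes reading quoted above a bit more tangible, here is a small numerical toy: a handful of invented energy levels, the Boltzmann distribution p_i = exp(-E_i/kT)/Z over them, and the Gibbs/Shannon entropy -SUM p_i ln p_i. As temperature rises, more microstates become effectively accessible and the "missing information" grows toward its maximum-ignorance value; the energy levels are assumptions for illustration only.

```python
from math import exp, log

energies = [0.0, 1.0, 1.0, 2.0, 3.0]      # toy energy levels (arbitrary units)

def boltzmann(levels, kT):
    """Boltzmann probabilities exp(-E/kT) / Z over the given levels."""
    weights = [exp(-e / kT) for e in levels]
    Z = sum(weights)                       # the partition function
    return [w / Z for w in weights]

def entropy_nats(p):
    """Gibbs/Shannon entropy -sum p ln p, in nats."""
    return -sum(pi * log(pi) for pi in p if pi > 0)

for kT in (0.1, 1.0, 10.0):
    S = entropy_nats(boltzmann(energies, kT))
    print(f"kT = {kT:>4}: entropy = {S:.3f} nats (max ln 5 = {log(5):.3f})")
# Low kT: the system is pinned near its ground state, little is "missing".
# High kT: all five states approach equal probability and the entropy
# approaches ln 5 -- the missing information at maximum ignorance.
```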
Rob, thanks for coming in. But I don't think you've addressed my objections. In fact, if anything you've made it worse. Aside from continuing to push Granville Sewell (whom I've refuted here before), you said:
Second, entropy increase = information decrease; entropy decrease=information increase. This is not my definition, it’s Claude Shannon’s definition of information in 1940, while the definition of entropy was coined closer to 1840.
As I pointed out when you made this claim before:
First, a minor quibble: correct me if I'm wrong, but I don't think Shannon ever used this definition. I'm not an expert on the history here, but Norbert Wiener was the first I know of to identify information with a decrease in Shannon entropy, but not thermodynamic entropy. AIUI Leon Brillouin was the first to identify it with negative thermodynamic entropy.
And as I said in my first comment on this post, entropy decreases are common and unremarkable. If you couple that with your claim about the relation between entropy and information, leads to some really silly conclusions. Here's the example I gave in that previous discussion:
But it's even worse than that, because according to this definition, simply cooling something off increases its information by a huge amount. Consider cooling one cc (about a thimbleful) of water off by one degree Centigrade (=1.8 degrees Fahrenheit) from, say, 27° C to 26° C (absolute temperatures of 300.15 K and 299.15 K respectively). The amount of heat removed is (almost by definition) about 1 calorie, so ΔQ = -1 cal, and ΔInformation = -ΔQ/T ~= -(-1 cal) / 300 K = +3.33e-3 cal/K. To convert that from thermodynamic units to information units (bits), we need to divide by Boltzmann's constant times the natural log of 2; that's k_B * ln(2) = 3.298e-24 cal/K * 0.6931 = 2.286e-24 cal/K. Dividing that into the entropy change we got above gives an information increase of …wait for it… 1.46e21 bits. That's over a thousand billion billion bits of information just because a little water cooled off slightly. I'm going to go ahead and claim that this definition has almost nothing to do with what most people mean by "information".
You never responded. And you haven't done anything to defend your original claim that Darwinian evolution is unique in decreasing entropy (or "reverses entropy-flow", if that means something different). Furthermore, since that exchange you and I got into another discussion on pretty much this same subject; here's my summary at the end:
– I argued that your criticism of Jeremy England’s work was mistaken, and based on conflating different definitions of “information”. You don’t seem to have responded. – I refuted your claim that “Darwinian physicists refuse to calculate” the relevant information flows, and showed that the actual information flow is far beyond anything evolution might require. No response. (Note: I suppose you could point out that I’m not actually a physicist, and therefore don’t qualify as a “Darwinian physicist”, but that would be quibbling. Dr. Emory F. Bunn — an actual physicist — has done a very similar calculation, just without the conversion to information units.) – I pointed out that you’d failed to convert Sir Fred Hoyle’s probability figure into the appropriate form before you cited it as information. Your response, apparently, is that I should read Hoyle so we’ll “have something to talk about beyond dimensional analysis”. Why should I bother? In the first place, I don’t need to know anything about the details of his calculation to know how to convert it to information units (or to see that you didn’t do the conversion). In the second place, he may have believed in evolution, but he doesn’t seem to have understood it at all well; therefore I doubt his calculations have any actual relevance. – Finally, I asked for a more specific reference to the “information flow” calculation you said Granville Sewell had done, and your reply seems to be “Sewell ~2010”. That’s not more specific.
You didn't respond any further. Are you going to respond seriously this time? Gordon Davisson
kf, you may appreciate this excerpt from Marshall's book:
Perry Marshall, Evolution 2.0, page 153:

Wanna Build a Cell? A DVD Player Might Be Easier

Imagine that you’re building the world’s first DVD player. What must you have before you can turn it on and watch a movie for the first time? A DVD. How do you get a DVD? You need a DVD recorder first. How do you make a DVD recorder? First you have to define the language. When Russell Kirsch (who we met in an earlier chapter) created the world’s first digital image, he had to define a language for images first. Likewise you have to define the language that gets written on the DVD, then build hardware that speaks that language. Language must be defined first.

Our DVD recorder/player problem is an encoding-decoding problem, just like the information in DNA. You’ll recall that communication, by definition, requires four things to exist:
1. A code
2. An encoder that obeys the rules of a code
3. A message that obeys the rules of the code
4. A decoder that obeys the rules of the code
These four things—language, transmitter of language, message, and receiver of language—all have to be precisely defined in advance before any form of communication can be possible at all.

A camera sends a signal to a DVD recorder, which records a DVD. The DVD player reads the DVD and converts it to a TV signal. This is conceptually identical to DNA translation. The only difference is that we don’t know how the original signal—the pattern in the first DNA strand—was encoded. The first DNA strand had to contain a plan to build something, and that plan had to get there somehow. An original encoder that translates the idea of an organism into instructions to build the organism (analogous to the camera) is directly implied.

The rules of any communication system are always defined in advance by a process of deliberate choices. There must be prearranged agreement between sender and receiver, otherwise communication is impossible. By definition, a communication system cannot evolve from something simpler, because evolution itself requires communication to exist first. You can’t make copies of a message without the message, and you can’t create a message without first having a language. And before that, you need intent. A code is an abstract, immaterial, nonphysical set of rules. There is no physical law that says ink on a piece of paper formed in the shape T-R-E-E should correspond to that large leafy organism in your front yard. You cannot derive the local rules of a code from the laws of physics, because hard physical laws necessarily exclude choice. On the other hand, the coder decides whether “1” means “on” or “off.” She decides whether “0” means “off” or “on.” Codes, by definition, are freely chosen. The rules of the code come before all else. These rules of any language are chosen with a goal in mind: communication, which is always driven by intent.
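As a purely illustrative aside (this is not from Marshall's book), the four components he lists can be mimicked in a few lines of code; the only point of the toy sketch below, with a made-up code table and message, is that encoder and decoder must share the same pre-agreed table of rules before any message can get through:

```python
# Toy illustration only: a shared code table, an encoder, a message, and a decoder.
# Nothing here comes from Marshall's book; the code table and message are made up.

CODE = {"A": "00", "C": "01", "G": "10", "T": "11"}        # the freely chosen rules
DECODE = {bits: symbol for symbol, bits in CODE.items()}   # receiver's copy of the same rules

def encode(message: str) -> str:
    """Encoder: maps each symbol to its agreed bit pattern."""
    return "".join(CODE[s] for s in message)

def decode(signal: str) -> str:
    """Decoder: only works because it shares the table the encoder used."""
    return "".join(DECODE[signal[i:i + 2]] for i in range(0, len(signal), 2))

message = "GATTACA"
signal = encode(message)
assert decode(signal) == message   # round trip succeeds only with a pre-agreed table
print(signal)                      # 10001111000100
```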
bornagain77
RS, good point, an expert on differential equations is -- in that context! -- asking pointed questions about entropy, info flow, organisation and more. My own thoughts start from what happens when raw energy flows into a system as heat or the like. That it may exhaust some heat to a lower temperature reservoir is a secondary matter. And Clausius' heat flow situation that gives rise to the net entropy rise conclusion is based on such an inflow. GD needs to answer to the issue of increased statistical weight for the resulting macrostate. KF PS: Then, there is the inconvenient little point that DNA has coded, in effect alphabetic text expressing algorithmic steps in it, in the heart of cell based life. A point originally put on the table by Crick, certainly by March 19, 1953. This is a LINGUISTIC phenomenon. kairosfocus
Thanks BA77 for saving me from having to post all those references. If you want to understand entropy, there is just one "must read" article and it's written by a mathematician, Granville Sewell. I have my beef with mathematicians doing physics, but the one thing they never do is fudge the numbers. Only after you've read Sewell can we start to talk about entropy, Earth, closed systems and the like. Otherwise I'm wasting my breath. Second, entropy increase = information decrease; entropy decrease = information increase. This is not my definition, it's Claude Shannon's definition of information in 1940, while the definition of entropy was coined closer to 1840. And finally, there really has never been an answer to the objection that Darwinism reverses entropy-flow. Oh, there have been many attempts to shut down debate, like global warming, but that doesn't constitute an explanation. A list of scientific book-length objections to Darwinism just crossed my desk, which contained over 50 titles starting in 1870 and continuing to the present. And while I cannot compete with GD for superlatives, those books are the awesomest, indisputabilest, cosmic evidence that the objection, raised early and often, has not gone away. But as I said before, read Granville. Then we can talk. Robert Sheldon
TWSYF: actually, is that set of internalised mouth-noises the illusion termed "you" throws up anything more than just that: noise, full of sound and fury, signifying nothing . . . even as it pretends to dress up in a lab coat and to be a justification for atheistical evolutionary materialism? And if that illusion "you" thinks -- whatever further illusion that is -- so, on what basis traceable to nothing but blind chance and necessity playing meaninglessly on matter through energy? KF kairosfocus
Wishful thinking, speculation, and faith. The new tag line for atheism. Truth Will Set You Free
I took some time to summarize BA77's argument with my own editorial insertions for impact. I hope I've done it justice. Feel free to comment, revise, etc. - and attempts to refute it are always welcome:

Entropy

What materialists never tell us, in contradiction to their claim that the earth is an open system where the entry of energy permits a reduction in entropy, is that the energy allowed to enter the atmosphere of the Earth is constrained, i.e. finely-tuned, to 1 trillionth of a trillionth of the entire electromagnetic spectrum. This fine-tuning of the size of light’s wavelengths and the constraints on the size allowable for the protein molecules of organic life strongly indicate that they were tailor-made for each other – and obviously inexplicable in the materialist story-line. Moreover, the light coming from the sun must be of the ‘right color’ – another finely-tuned aspect to support life.

And even with these highly constrained factors, it still does not fully negate the disordering effects of pouring raw energy into an open system. This is made evident by the fact that objects left out in the sun age and deteriorate much more quickly than objects stored inside in cool conditions, away from the sun and heat. Instead of reducing entropy, just pouring raw energy into an ‘open system’ actually increases the disorder of the system. Again, we hear no argument thus far from materialists on these matters.

To offset this disordering effect that the raw energy of sunlight has on objects, the raw energy from the sun must be processed further to be of biological utility. This is accomplished by photosynthesis, which converts sunlight into ATP. Photosynthesis is an amazingly complex and dynamic process which involves about 100 proteins that are highly ordered within the photosynthetic membranes of the cell, thus further conflicting with the simplistic claim that an open system will decrease entropy. Again, materialists ignore this and don't even attempt to explain it.

The ability to do photosynthesis is widely distributed throughout the bacterial domain in six different phyla, and no one can point to a pattern of evolution as its origin and source. This is simply another example of the complete ignorance offered from the evolutionary community. Additionally, scientists are in disagreement over what came first — replication, or metabolism, or energy. You need enzymes to make ATP and you need ATP to make enzymes. The question is: where did energy come from before either of these two things existed? Nobody is holding their breath for a Darwinist response.

Photosynthesis itself is finely-tuned as an entropy-reducer, as results showed a 100% free-energy transduction efficiency and a tight mechanochemical coupling of F1-ATPase. There was no expectation that this would be the case, since unintelligent forces are never efficient at all (and do not create finely-tuned processes). Photosynthesis achieves this astonishing efficiency in overcoming thermodynamic noise by way of ‘quantum coherence’. Biological systems can direct a quantum process, in this case energy transport, in astoundingly subtle and controlled ways – showing remarkable resistance to the aggressive, random background noise of biology and extreme environments. Photosynthetic organisms, such as plants and some bacteria, have mastered this process: in less than a couple of trillionths of a second, 95 percent of the sunlight they absorb is whisked away to drive the metabolic reactions that provide them with energy.
The efficiency of human-designed photovoltaic cells currently on the market is around 20 percent. Again, materialists just stay silent on this.

Moreover, protein folding is not achieved by random thermodynamic jostling but is also found to be achieved by ‘quantum transition’. By the conventional thinking of evolutionists, a chain of amino acids can only change from one shape to another by mechanically passing through various shapes in between. But scientists show that the process is a quantum one and that idea is entirely false. The movement of proteins in the cell also defies entropy. Almost 90 percent of DNA is covered with proteins and they are moving all the time. However, floating proteins find their targets for binding quickly as well. Scientists point out that it is counterintuitive, because one would think collisions between a protein and other molecules on DNA would slow it down. But the system defies entropy and conflicts with expectations, indicating that there is something special about the structure, about the order, inside a living cell. And indeed, in regard to quantum biology, there is much evidence confirming that current biologists working under the reductive materialistic framework of Darwinian evolution are not even using the correct theoretical framework to properly understand life in the first place. They are beyond clueless.

The very same materialists deny that any immaterial essences exist. Some deny that information has any empirical quality. However, it is non-material information that constrains biological life to be so far out of thermodynamic equilibrium. Information as independent of energy and matter ‘resolves the thermodynamic issues and invokes the correct paradigm for understanding the vital area of thermodynamic/organisational interactions’. Information, which is immaterial, has now been experimentally shown to have a ‘thermodynamic content’. Scientists have shown that energy can be converted from units of information. Scientists in Japan succeeded in converting information into free energy. Information, entropy, and energy should be treated on equal footings. However, materialists and evolutionists have simply tried to ignore these issues. Their worldview has nothing to offer on it anyway. The quantity of information involved in keeping life so far out of thermodynamic equilibrium with the rest of the environment is enormous (4 x 10^12 bits). It’s a fundamental aspect of life – again materialists avoid mentioning this.

Although Darwinists have many times appealed to the ‘random thermodynamic jostling’ in cells to try to say that life is not designed (i.e. Carl Zimmer’s “barely constrained randomness” remark, for one example), the fact of the matter is that if anything ever gave evidence for the supernatural design of the universe it is the initial 1 in 10^10^123 entropy of the universe. This number, 1 in 10^10^123, is so large that, if it were written down in ordinary notation, it could not be written down even if you used every single particle of the universe to denote a decimal place. Yet, despite entropy’s broad explanatory scope for the universe, in the quantum zeno effect we find that “an unstable particle, if observed continuously, will never decay.” The destructive power of black holes is an example of entropy and what one should expect everywhere in the universe.
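The "could not be written down" claim in the preceding paragraph is a simple comparison of exponents: writing 10^(10^123) out in ordinary decimal notation takes about 10^123 digits, while a common order-of-magnitude estimate puts roughly 10^80 particles in the observable universe. A trivial sketch of that comparison:

```python
# Comparing exponents: digits needed to write out 10^(10^123) versus a rough
# particle count for the observable universe (~10^80 is a common estimate).

digits_needed_log10 = 123     # 10^(10^123) has about 10^123 decimal digits
particles_log10 = 80          # assumed order-of-magnitude particle count

shortfall = digits_needed_log10 - particles_log10
print(f"Digits outnumber particles by a factor of roughly 10^{shortfall}")   # 10^43
```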
However, on earth we can see, in the science of the Shroud, a total lack of gravity, lack of entropy (without gravitational collapse), no time, no space—it conforms to no known law of physics. Of course, materialists have no interest in this and try to ridicule it, thus revealing their own ignorance, bias, fear and lack of wonder or interest about what reality, life and the universe is.

In conclusion

Thus, contrary to the claims of Darwinists that entropy presents no obstacle for Darwinian evolution, the fact of the matter is that not only is entropy not compatible with life, but entropy is found to be the primary source for death and destruction in this universe. In fact, Jesus Christ, in his defiance of gravity at his resurrection from the dead, apparently had to deal directly with the deadly force of entropy. Silver Asiatic
Silver Asiatic, GD states:
"Silver Asiatic, I wouldn’t be so impressed with ba77’s research unless you’ve checked it out for yourself;"
Which is an interesting statement for GD to make, since we have no reason whatsoever to trust anything GD says: GD, apparently, believes his conscious thoughts are merely, and ultimately, the end results of the 'random thermodynamic jostling' of the material particles of the universe and of his brain.
"Supposing there was no intelligence behind the universe, no creative mind. In that case, nobody designed my brain for the purpose of thinking. It is merely that when the atoms inside my skull happen, for physical or chemical reasons, to arrange themselves in a certain way, this gives me, as a by-product, the sensation I call thought. But, if so, how can I trust my own thinking to be true? It's like upsetting a milk jug and hoping that the way it splashes itself will give you a map of London. But if I can't trust my own thinking, of course I can't trust the arguments leading to Atheism, and therefore have no reason to be an Atheist, or anything else. Unless I believe in God, I cannot believe in thought: so I can never use thought to disbelieve in God." - C.S. Lewis, The Case for Christianity, p. 32 “It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms. In order to escape from this necessity of sawing away the branch on which I am sitting, so to speak, I am compelled to believe that mind is not wholly conditioned by matter”. J. B. S. Haldane ["When I am dead," in Possible Worlds: And Other Essays [1927], Chatto and Windus: London, 1932, reprint, p.209. Sam Harris's Free Will: The Medial Pre-Frontal Cortex Did It - Martin Cothran - November 9, 2012 Excerpt: There is something ironic about the position of thinkers like Harris on issues like this: they claim that their position is the result of the irresistible necessity of logic (in fact, they pride themselves on their logic). Their belief is the consequent, in a ground/consequent relation between their evidence and their conclusion. But their very stated position is that any mental state -- including their position on this issue -- is the effect of a physical, not logical cause. By their own logic, it isn't logic that demands their assent to the claim that free will is an illusion, but the prior chemical state of their brains. The only condition under which we could possibly find their argument convincing is if they are not true. The claim that free will is an illusion requires the possibility that minds have the freedom to assent to a logical argument, a freedom denied by the claim itself. It is an assent that must, in order to remain logical and not physiological, presume a perspective outside the physical order. http://www.evolutionnews.org/2012/11/sam_harriss_fre066221.html (1) rationality implies a thinker in control of thoughts. (2) under materialism a thinker is an effect caused by processes in the brain (determinism). (3) in order for materialism to ground rationality a thinker (an effect) must control processes in the brain (a cause). (1)&(2) (4) no effect can control its cause. Therefore materialism cannot ground rationality. per Box UD
Thus Silver Asiatic, since GD has apparently given up rationality altogether, in his denial of the reality of his own conscious mind and his free will, we have no reason whatsoever to trust anything that GD says about anything. We might as well ask the rustling of leaves for a coherent answer to some question rather than ask GD for one. Moreover, the 'random thermodynamic jostling' of the atoms of GD's brain suggested that you check my cited research out further, and also claimed that the quantum zeno effect "didn’t involve conscious observation at all. It used a laser beam to put the freeze on decay." The objection raised to my cited research by the random jostling of GD's brain misses the mark on a couple of fronts. First, in order for us to 'consciously observe' the atomic world in the first place it is necessary for us to use lasers or some other sort of detector. It simply is impossible for us to 'observe' atomic particles any other way in a laboratory experiment since they are so small. Yet, just because we are forced to use detectors to 'consciously observe' the actions of the atomic world, that does not 'answer the question' as to why 'consciously observing' the atomic world, even with a detector, has such a dramatic impact on how the atomic world behaves. In the following video, at the 16:34 minute mark, the reason why detector interference does not explain quantum wave collapse is explained (i.e. observation changes the nature of what we are observing, not just the activity of what we are observing):
Quantum Physics And How We Affect Reality! - video - (17:21 minute mark) https://youtu.be/REATuidImYw?t=1041
Prior to that explanation in the video, Sean Carroll, an atheistic physics professor, tried to claim that it was the detector in the double slit, as GD is currently trying to claim with the zeno effect, that was solely responsible for the weird actions of the double slit. But after the interviewer pointed out that "observation changes the nature of what we are observing not just the activity of what we are observing", Sean Carroll then backed off his original claim and honestly stated this:
'The short answer is we don't know. This is the fundamental mystery of quantum mechanics. The reason why quantum mechanics is 'difficult'. Mysteriously when we look at things we see particles, when we are not looking things are waves.' Sean Carroll
Moreover, specifically because of the 'detector objection' from atheists, I cited the 'interaction free measurement' for the quantum zeno effect. I did not cite this following 'direct interaction experiment' in which the laser directly interacted with the particles:
'Zeno effect' verified—atoms won't move while you watch October 23, 2015 by Bill Steele Excerpt: The researchers demonstrated that they were able to suppress quantum tunneling merely by observing the atoms. This so-called "Quantum Zeno effect", named for a Greek philosopher, derives from a proposal in 1977 by E. C. George Sudarshan and Baidyanath Misra at the University of Texas, Austin,, who pointed out that the weird nature of quantum measurements allows, in principle, for a quantum system to be "frozen" by repeated measurements. Previous experiments have demonstrated the Zeno Effect with the "spins" of subatomic particles. "This is the first observation of the Quantum Zeno effect by real space measurement of atomic motion," Vengalattore said. "Also, due to the high degree of control we've been able to demonstrate in our experiments, we can gradually 'tune' the manner in which we observe these atoms. Using this tuning, we've also been able to demonstrate an effect called 'emergent classicality' in this quantum system." Quantum effects fade, and atoms begin to behave as expected under classical physics. The researchers observed the atoms under a microscope by illuminating them with a separate imaging laser. A light microscope can't see individual atoms, but the imaging laser causes them to fluoresce, and the microscope captured the flashes of light. When the imaging laser was off, or turned on only dimly, the atoms tunneled freely. But as the imaging beam was made brighter and measurements made more frequently, the tunneling reduced dramatically. http://phys.org/news/2015-10-zeno-effect-verifiedatoms-wont.html
On the other hand, in the interaction-free measurement experiment that I actually cited in post 18, the quantum zeno effect was 'detected without interacting with a single atom':
Interaction-free measurements by quantum Zeno stabilization of ultracold atoms – 14 April 2015 Excerpt: In our experiments, we employ an ultracold gas in an unstable spin configuration, which can undergo a rapid decay. The object—realized by a laser beam—prevents this decay because of the indirect quantum Zeno effect and thus, its presence can be detected without interacting with a single atom. http://www.nature.com/ncomms/2015/150414/ncomms7811/full/ncomms7811.html?WT.ec_id=NCOMMS-20150415
The principle behind 'interaction free measurement' is much more clearly explained in the following video, which shows that although a detector is placed at only a single slit in the double slit experiment, the electron still collapses in the slit with no detector by it; i.e. just consciously knowing the particle is not at one slit forces the wave to collapse to its particle state at the other slit, which has no detector by it!
An Interaction-Free Quantum Experiment (Zeilinger bomb-tester experiment; a detector is placed at only one slit during the double slit experiment, yet the photon or electron still collapses in the unobserved slit) - video https://www.youtube.com/watch?v=vOv8zYla1wY
Richard Conn Henry remarks on the fallacious 'decoherence' objection of atheists:
The Mental Universe - Richard Conn Henry - Professor of Physics, Johns Hopkins University Excerpt: The only reality is mind and observations, but observations are not of things. To see the Universe as it really is, we must abandon our tendency to conceptualize observations as things.,,, Physicists shy away from the truth because the truth is so alien to everyday physics. A common way to evade the mental universe is to invoke "decoherence" - the notion that "the physical environment" is sufficient to create reality, independent of the human mind. Yet the idea that any irreversible act of amplification is necessary to collapse the wave function is known to be wrong: in "Renninger-type" experiments, the wave function is collapsed simply by your human mind seeing nothing. The universe is entirely mental,,,, The Universe is immaterial — mental and spiritual. Live, and enjoy. http://henry.pha.jhu.edu/The.mental.universe.pdf
The following video more fully explains why the 'decoherence' objection by atheists does not solve the measurement problem in Quantum Mechanics:
The Measurement Problem in quantum mechanics - (Inspiring Philosophy) - 2014 video https://www.youtube.com/watch?v=qB7d5V71vUE
Thus Silver Asiatic, although GD stated that "I wouldn’t be so impressed with ba77’s research unless you’ve checked it out for yourself", the fact of the matter is that, since GD apparently denies the reality of his conscious mind and free will, we have no reason to trust the random jostling of GD's brain in the first place. Moreover, when we look further at the research I cited, we find that the atoms of GD's brain (purposely?) omitted the fact that I cited an experiment in which "the quantum Zeno effect (was) detected without interacting with a single atom". In other words, I did not cite the experiment where a laser directly interacted with the particles exhibiting the zeno effect, as GD implied I did. Moreover, the quantum zeno effect is far from the only evidence I could have cited for conscious observation having a dramatic effect on material reality. For instance, I could have also cited this recent variation of the Wheeler Delayed Choice experiment in which it was found "That Reality Doesn’t Exist If You Are Not Looking at It":
New Mind-blowing Experiment Confirms That Reality Doesn’t Exist If You Are Not Looking at It - June 3, 2015 Excerpt: The results of the Australian scientists’ experiment, which were published in the journal Nature Physics, show that this choice is determined by the way the object is measured, which is in accordance with what quantum theory predicts. “It proves that measurement is everything. At the quantum level, reality does not exist if you are not looking at it,” said lead researcher Dr. Andrew Truscott in a press release.,,, “The atoms did not travel from A to B. It was only when they were measured at the end of the journey that their wave-like or particle-like behavior was brought into existence,” he said. Thus, this experiment adds to the validity of the quantum theory and provides new evidence to the idea that reality doesn’t exist without an observer. http://themindunleashed.org/2015/06/new-mind-blowing-experiment-confirms-that-reality-doesnt-exist-if-you-are-not-looking-at-it.html “Reality is in the observations, not in the electron.” – Paul Davies “We have become participators in the existence of the universe. We have no right to say that the past exists independent of the act of observation.” – John Wheeler
Thus all in all, as usual, I find the atoms of GD's brain to be thoroughly disingenuous to the evidence at hand. Frankly, his lack of intellectual honesty is par for the course for people who oppose ID arguments. But why should we expect any different from a 'random jostling of atoms'? Of related interest to thermodynamics and consciously observing a single photon: the following researchers are thoroughly puzzled as to how it is remotely possible for us to become consciously aware of a single photon in spite of the tremendous amount of thermodynamic noise that should prevent us from ever being able to do so:
Study suggests humans can detect even the smallest units of light - July 21, 2016 Excerpt: Research,, has shown that humans can detect the presence of a single photon, the smallest measurable unit of light. Previous studies had established that human subjects acclimated to the dark were capable only of reporting flashes of five to seven photons.,,, it is remarkable: a photon, the smallest physical entity with quantum properties of which light consists, is interacting with a biological system consisting of billions of cells, all in a warm and wet environment," says Vaziri. "The response that the photon generates survives all the way to the level of our awareness despite the ubiquitous background noise. Any man-made detector would need to be cooled and isolated from noise to behave the same way.",,, The gathered data from more than 30,000 trials demonstrated that humans can indeed detect a single photon incident on their eye with a probability significantly above chance. "What we want to know next is how does a biological system achieve such sensitivity? How does it achieve this in the presence of noise? http://phys.org/news/2016-07-humans-smallest.html
Verse:
2 Peter 1:16 For we have not followed cunningly devised fables, when we made known unto you the power and coming of our Lord Jesus Christ, but were eyewitnesses of his majesty.
bornagain77
Origenes @ 25 Yes, I noticed that mistake in GD's comment also. He states:
Far from showing something special about consciousness, it shows that (at least in this respect) a conscious observer can be replaced by a beam of light.
But in the paper you cite:
The researchers observed the atoms under a microscope by illuminating them with a separate imaging laser.
The laser is the means used to enable the observations. Silver Asiatic
Gordon Davisson: But if you actually look at the experiment that he cites, it didn’t involve conscious observation at all. It used a laser beam to put the freeze on decay.
Where does it say that the laser beam puts a freeze on decay? Or is it your personal hypothesis that this is the case?
Graduate students Yogesh Patil and Srivatsan K. Chakram created and cooled a gas of about a billion Rubidium atoms inside a vacuum chamber and suspended the mass between laser beams. In that state the atoms arrange in an orderly lattice just as they would in a crystalline solid. But at such low temperatures, the atoms can "tunnel" from place to place in the lattice. The famous Heisenberg uncertainty principle says that the position and velocity of a particle interact. Temperature is a measure of a particle's motion. Under extreme cold velocity is almost zero, so there is a lot of flexibility in position; when you observe them, atoms are as likely to be in one place in the lattice as another. The researchers demonstrated that they were able to suppress quantum tunneling merely by observing the atoms. This so-called "Quantum Zeno effect", named for a Greek philosopher, derives from a proposal in 1977 by E. C. George Sudarshan and Baidyanath Misra at the University of Texas, Austin, who pointed out that the weird nature of quantum measurements allows, in principle, for a quantum system to be "frozen" by repeated measurements. [source: phys.org]
Origenes
Gordon Davisson Thanks for your comment. You selected one example there but it seems you're saying that "pretty much everything" BA77 posts "he’s significantly wrong about". That's a strong claim, I'd say. But in any case, he offered a number of arguments starting at post 14 - I'll select just a few: ... visible light is incredibly fine-tuned for life to exist on earth. “These specific frequencies of light (that enable plants to manufacture food and astronomers to observe the cosmos) represent less than 1 trillionth of a trillionth (10^-24) of the universe’s entire range of electromagnetic emissions.” Moreover, the light coming from the sun must be of the ‘right color’ To say that photosynthesis defies Darwinian explanation is to make a dramatic understatement: Evolutionary biology: Out of thin air John F. Allen & William Martin: The measure of the problem is here: “Oxygenetic photosynthesis involves about 100 proteins that are highly ordered within the photosynthetic membranes of the cell.” http://www.nature.com/nature/j.....5610a.html Of related note: ATP synthase is ‘unexpectedly’ found to be, thermodynamically, 100% efficient ... It goes on, so I'd be interested in learning how this information is significantly wrong. Silver Asiatic
GD, Happy new year back to you. I must observe, nope, there is the issue of the underlying information content tied to functional organisation, and the issue of work from lucky noise. I argue that on this, the oh-there-is-nothing-to-see-there argument fails.

Water freezing has long since been addressed in, say, Thaxton et al, TMLO, and before that in Orgel, 1973: ORDER is not the same as ORGANISATION. Here, crystal order is imposed by and large through the peculiarities of the polar water molecule -- and the roots of that in the core laws and framing of the cosmos are themselves of interest to design thinkers and theorists at another level -- but the point is, organisation requires assembly per what Wicken 1979 identified as a wiring plan; it is aperiodic and functionally constrained, and also consistently seen to be not driven by blind chance and/or mechanical forces. The issue on the latter, tied to the 2nd law, is why.

The point of the statistical form of the 2nd law [the relevant underlying root issue deeply embedded in our understanding of it since over 100 years ago] is that it highlights a statistical search challenge to the point of unobservability. Once we see FSCO/I beyond 500 - 1,000 bits in accord with a functionally specific wiring plan, it is maximally implausible -- notice I specifically do not argue "improbable" -- that a sol system or observed cosmos scale blind chance and mechanical necessity search of the relevant configuration space will find a relevant island of function. And, empirically, such FSCO/I is routinely seen to originate, to the trillions of cases. Consistently, it is by design, that is, intelligently directed configuration. Per Newton's vera causa principle, we are entitled to infer this is the reliable cause of said phenomenon.

Yes, this has momentous consequences when we contemplate the phenomena of code [thus, TEXT and LANGUAGE], algorithms and associated molecular nanotech execution machinery in the living cell. Yes, it has further sobering consequences as we see the pattern of major body plans up to our own, involving 10 - 100 mn+ bits of further genetic information, per the genome scale patterns. Yes, it is likewise when we contemplate the pattern of deeply isolated small-number fold domains in AA sequence space. And more.

But that is the challenge: we face complex functionally specific organisation that is information rich and not at all the same as highly compressible order driven by mechanical forces as with crystallisation. This phenomenon cries out for its own proper explanation, not for conflation with a different phenomenon. The only actually observed adequate cause is design, and the statistical underpinnings of the 2nd law indicate a very good reason for that observation. There is something big to see and to pause over here, and it is inextricably tied to the 2nd law and its underlying statistical frame of thought. Once, that is, we see and accept the patent distinction between order and organisation. KF

PS: And laser experiments etc come about by: ______ ? kairosfocus
kairosfocus, happy new year, and thanks for your reply! I agree that heat flowing in at higher temperature and out at lower temperature is a necessary but not a sufficient condition for a heat engine (and a wide variety of other things). But it is sufficient for one important thing: it removes the second-law constraints against entropy decrease and/or free energy increase. Essentially, it doesn't mean anything interesting will happen, but it does mean that the second law doesn't forbid interesting things from happening. Or, to put it another way, if something (like an increase in FSCO/I) is forbidden on Earth, it's forbidden by something other than the second law of thermodynamics. But I'm getting a bit far from my original point. What's your take on my objections to Rob Sheldon's claims? Specifically:
- Rob said that entropy increases correspond to decreases in information, and entropy decreases correspond to increases in information. I said that this is only true if you adopt an unusual and not-relevant-to-ID definition of "information". (BTW, this "unusual definition" is essentially the "missing information" view you described, but as far as I can see not closely related to CSI, FSCO/I, etc.)
- Rob said that evolution is unique in being an irreversible reaction in which entropy decreases; I said that such reactions are entirely common and unremarkable (although they all involve processes that couple to equal-or-larger entropy increases somewhere else).
So what's your opinion on these two specific points? Do you think Rob is out to lunch, or that I'm out to lunch, or that we're both off the mark? (And if you think I'm wrong on either point, I have a followup question/challenge for you: in your understanding, when water freezes, how does its entropy change? Does that correspond to a change in information?) Gordon Davisson
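For concreteness on the closing freezing question, here is a back-of-envelope sketch using standard textbook values (latent heat of fusion about 334 J/g, freezing point 273.15 K); the "bits" line again applies the disputed entropy-as-negative-information identification only to show where it leads:

```python
# Back-of-envelope numbers for water freezing, using standard textbook values.
import math

K_B = 1.381e-23      # Boltzmann's constant, J/K
L_FUSION = 334.0     # latent heat of fusion, J per gram of water (approximate)
T_FREEZE = 273.15    # freezing point, K

dS_per_gram = -L_FUSION / T_FREEZE                   # about -1.22 J/K per gram: entropy drops
bits_per_gram = -dS_per_gram / (K_B * math.log(2))   # ~1.3e23 "bits" under the disputed definition

print(f"Entropy change on freezing: {dS_per_gram:.2f} J/K per gram")
print(f"Nominal 'information' gain: {bits_per_gram:.2e} bits per gram")
```

The heat released warms the surroundings, whose entropy rises by at least as much, which is the coupling Davisson refers to.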
Silver Asiatic, I wouldn't be so impressed with ba77's research unless you've checked it out for yourself; pretty much everything of his that I've checked out (or know about independently) he's significantly wrong about. Take the quantum Zeno effect as an example. He says "How in blue blazes can conscious observation put a freeze on entropic decay, unless consciousness was and is more foundational to reality than the 1 in 10^10^123 initial entropy is?" But if you actually look at the experiment that he cites, it didn't involve conscious observation at all. It used a laser beam to put the freeze on decay. Far from showing something special about consciousness, it shows that (at least in this respect) a conscious observer can be replaced by a beam of light. Gordon Davisson
BA77 posts 14-19, thank you for bringing all of that research together into a fascinating and comprehensive argument. Silver Asiatic
As well it is interesting to note the primary source for increasing entropy in the universe:
Entropy of the Universe – Hugh Ross – May 2010   Excerpt: Egan and Lineweaver found that supermassive black holes are the largest contributor to the observable universe’s entropy. They showed that these supermassive black holes contribute about 30 times more entropy than what the previous research teams estimated. http://www.reasons.org/entropy-universe   
Kip Thorne gets across the destructive power of black holes in a fairly dramatic fashion:
"Einstein's equation predicts that, as the astronaut reaches the singularity (of the black-hole), the tidal forces grow infinitely strong, and their chaotic oscillations become infinitely rapid. The astronaut dies and the atoms which his body is made become infinitely and chaotically distorted and mixed-and then, at the moment when everything becomes infinite (the tidal strengths, the oscillation frequencies, the distortions, and the mixing), spacetime ceases to exist." Kip S. Thorne - "Black Holes and Time Warps: Einstein's Outrageous Legacy" pg. 476
Moreover, in the Shroud of Turin, in Jesus Christ's resurrection from the dead, we have evidence of God overcoming the entropic forces of Gravity:
A Quantum Hologram of Christ's Resurrection? by Chuck Missler
Excerpt: “You can read the science of the Shroud, such as total lack of gravity, lack of entropy (without gravitational collapse), no time, no space—it conforms to no known law of physics.” The phenomenon of the image brings us to a true event horizon, a moment when all of the laws of physics change drastically. Dame Piczek created a one-fourth size sculpture of the man in the Shroud. When viewed from the side, it appears as if the man is suspended in mid air, indicating that the image defies previously accepted science.
http://www.khouse.org/articles/2008/847

THE EVENT HORIZON (Space-Time Singularity) OF THE SHROUD OF TURIN. - Isabel Piczek - Particle Physicist
Excerpt: We have stated before that the images on the Shroud firmly indicate the total absence of Gravity. Yet they also firmly indicate the presence of the Event Horizon. These two seemingly contradict each other and they necessitate the past presence of something more powerful than Gravity that had the capacity to solve the above paradox.
http://shroud3d.com/findings/isabel-piczek-image-formation

Turin shroud – (Particle Physicist explains event horizon) – video https://www.youtube.com/watch?v=HHVUGK6UFK8

The Resurrection of Jesus Christ as the 'Theory of Everything' (Entropic Concerns) https://www.youtube.com/watch?v=rqv4wVP_Fkc&list=PLtAP1KN7ahia8hmDlCYEKifQ8n65oNpQ5&index=2
Thus, contrary to the claims of Darwinists that entropy presents no obstacle for Darwinian evolution, the fact of the matter is that not only is entropy not compatible with life, but entropy is found to be the primary source for death and destruction in this universe. In fact, Jesus Christ, in his defiance of gravity at his resurrection from the dead, apparently had to deal directly with the deadly force of entropy. Supplemental videos:
Special and General Relativity compared to Heavenly and Hellish Near Death Experiences https://www.youtube.com/watch?v=TbKELVHcvSI&list=PLtAP1KN7ahia8hmDlCYEKifQ8n65oNpQ5&index=1 Resurrection of Jesus Christ as the Theory of Everything - Centrality Concerns https://www.youtube.com/watch?v=8uHST2uFPQY&list=PLtAP1KN7ahia8hmDlCYEKifQ8n65oNpQ5&index=4
Verse:
Colossians 1:15-20 The Son is the image of the invisible God, the firstborn over all creation. For in him all things were created: things in heaven and on earth, visible and invisible, whether thrones or powers or rulers or authorities; all things have been created through him and for him. He is before all things, and in him all things hold together. And he is the head of the body, the church; he is the beginning and the firstborn from among the dead, so that in everything he might have the supremacy. For God was pleased to have all his fullness dwell in him, and through him to reconcile to himself all things, whether things on earth or things in heaven, by making peace through his blood, shed on the cross.
bornagain77
Moreover, although Darwinists have many times appealed to the 'random thermodynamic jostling' in cells to try to say that life is not designed (i.e. Carl Zimmer's "barely constrained randomness" remark, for one example), the fact of the matter is that if anything ever gave evidence for the supernatural design of the universe it is the initial 1 in 10^10^123 entropy of the universe.
“The time-asymmetry is fundamentally connected with the Second Law of Thermodynamics: indeed, the extraordinarily special nature (to a greater precision than about 1 in 10^10^123, in terms of phase-space volume) can be identified as the “source” of the Second Law (Entropy).” Roger Penrose - The Physics of the Small and Large: What is the Bridge Between Them? How special was the big bang? – Roger Penrose Excerpt: This now tells us how precise the Creator’s aim must have been: namely to an accuracy of one part in 10^10^123. (from the Emperor’s New Mind, Penrose, pp 339-345 – 1989)
This number, 1 in 10^10^123, is so large that, if it were written down in ordinary notation, it could not be written down even if you used every single particle of the universe to denote a decimal place. Moreover, the probability represented by this number, 1 in 10^10^123, is so small that it drives, via the Boltzmann Brain problem, atheistic materialism into catastrophic epistemological failure:
Multiverse and the Design Argument - William Lane Craig Excerpt: Roger Penrose of Oxford University has calculated that the odds of our universe’s low entropy condition obtaining by chance alone are on the order of 1 in 10^10(123), an inconceivable number. If our universe were but one member of a multiverse of randomly ordered worlds, then it is vastly more probable that we should be observing a much smaller universe. For example, the odds of our solar system’s being formed instantly by the random collision of particles is about 1 in 10^10(60), a vast number, but inconceivably smaller than 1 in 10^10(123). (Penrose calls it “utter chicken feed” by comparison [The Road to Reality (Knopf, 2005), pp. 762-5]). Or again, if our universe is but one member of a multiverse, then we ought to be observing highly extraordinary events, like horses’ popping into and out of existence by random collisions, or perpetual motion machines, since these are vastly more probable than all of nature’s constants and quantities’ falling by chance into the virtually infinitesimal life-permitting range. Observable universes like those strange worlds are simply much more plenteous in the ensemble of universes than worlds like ours and, therefore, ought to be observed by us if the universe were but a random member of a multiverse of worlds. Since we do not have such observations, that fact strongly disconfirms the multiverse hypothesis. On naturalism, at least, it is therefore highly probable that there is no multiverse. — Penrose puts it bluntly “these world ensemble hypothesis are worse than useless in explaining the anthropic fine-tuning of the universe”. http://www.reasonablefaith.org/multiverse-and-the-design-argument Does a Multiverse Explain the Fine Tuning of the Universe? - Dr. Craig (observer selection effect vs. Boltzmann Brains) - video https://www.youtube.com/watch?v=pb9aXduPfuA
It is also important to note the broad explanatory scope of entropy for the universe:
Shining Light on Dark Energy – October 21, 2012 Excerpt: It (Entropy) explains time; it explains every possible action in the universe;,, Even gravity, Vedral argued, can be expressed as a consequence of the law of entropy. ,,, The principles of thermodynamics are at their roots all to do with information theory. Information theory is simply an embodiment of how we interact with the universe —,,, http://crev.info/2012/10/shining-light-on-dark-energy/
In fact, entropy is the primary reason why our material, temporal, bodies grow old and die,,,
Entropy Explains Aging, Genetic Determinism Explains Longevity, and Undefined Terminology Explains Misunderstanding Both - 2007 Excerpt: There is a huge body of knowledge supporting the belief that age changes are characterized by increasing entropy, which results in the random loss of molecular fidelity, and accumulates to slowly overwhelm maintenance systems [1–4].,,, http://www.plosgenetics.org/article/info%3Adoi/10.1371/journal.pgen.0030220
Yet, despite entropy's broad explanatory scope for the universe, in the quantum zeno effect we find that “an unstable particle, if observed continuously, will never decay.”
Quantum Zeno Effect “The quantum Zeno effect is,, an unstable particle, if observed continuously, will never decay.” http://en.wikipedia.org/wiki/Quantum_Zeno_effect Interaction-free measurements by quantum Zeno stabilization of ultracold atoms – 14 April 2015 Excerpt: In our experiments, we employ an ultracold gas in an unstable spin configuration, which can undergo a rapid decay. The object—realized by a laser beam—prevents this decay because of the indirect quantum Zeno effect and thus, its presence can be detected without interacting with a single atom. http://www.nature.com/ncomms/2015/150414/ncomms7811/full/ncomms7811.html?WT.ec_id=NCOMMS-20150415 Quantum Zeno effect “It has been experimentally confirmed,, that unstable particles will not decay, or will decay less rapidly, if they are observed. Somehow, observation changes the quantum system. We’re talking pure observation, not interacting with the system in any way.” Douglas Ell – Counting to God – pg. 189 – 2014 – Douglas Ell graduated early from MIT, where he double majored in math and physics. He then obtained a masters in theoretical mathematics from the University of Maryland. After graduating from law school, magna cum laude, he became a prominent attorney.
This is just fascinating! How in blue blazes can conscious observation put a freeze on entropic decay, unless consciousness was and is more foundational to reality than the 1 in 10^10^123 initial entropy is? This finding rules out any possibility that my consciousness was and is the result of the thermodynamic processes of the universe and of my brain. In fact, I hold it to be proof that consciousness must precede material reality just as the Christian Theist presupposes. Perhaps the most compelling piece of evidence that there must be immaterial information constraining life to be so far out of thermodynamic equilibrium is to note that the thermodynamic processes of the universe are, for the most part, kept in check until the moment of death at which time the entropic forces kick in full force and, relatively quickly, disintegrate the approx. billion-trillion protein molecules of a single human body into dust. Stephen Talbott elucidates that 'fateful transition' as such:
The Unbearable Wholeness of Beings - Stephen L. Talbott - 2010 Excerpt: Virtually the same collection of molecules exists in the canine cells during the moments immediately before and after death. But after the fateful transition no one will any longer think of genes as being regulated, nor will anyone refer to normal or proper chromosome functioning. No molecules will be said to guide other molecules to specific targets, and no molecules will be carrying signals, which is just as well because there will be no structures recognizing signals. Code, information, and communication, in their biological sense, will have disappeared from the scientist’s vocabulary. ,,, the question, rather, is why things don’t fall completely apart — as they do, in fact, at the moment of death. What power holds off that moment — precisely for a lifetime, and not a moment longer? Despite the countless processes going on in the cell, and despite the fact that each process might be expected to “go its own way” according to the myriad factors impinging on it from all directions, the actual result is quite different. Rather than becoming progressively disordered in their mutual relations (as indeed happens after death, when the whole dissolves into separate fragments), the processes hold together in a larger unity. http://www.thenewatlantis.com/publications/the-unbearable-wholeness-of-beings Scientific evidence that we do indeed have an eternal soul (Elaboration on Talbott's question “What power holds off that moment — precisely for a lifetime, and not a moment longer?”)– video 2016 https://youtu.be/h2P45Obl4lQ
bornagain77
In the following paper, Dr Andy C. McIntosh, who is professor of thermodynamics and combustion theory at the University of Leeds, holds that it is non-material information that constrains biological life to be so far out of thermodynamic equilibrium. Moreover, Dr. McIntosh holds that regarding information as independent of energy and matter ‘resolves the thermodynamic issues and invokes the correct paradigm for understanding the vital area of thermodynamic/organisational interactions’.
Information and Thermodynamics in Living Systems – Andy C. McIntosh – 2013 Excerpt: ,,, information is in fact non-material and that the coded information systems (such as, but not restricted to the coding of DNA in all living systems) is not defined at all by the biochemistry or physics of the molecules used to store the data. Rather than matter and energy defining the information sitting on the polymers of life, this approach posits that the reverse is in fact the case. Information has its definition outside the matter and energy on which it sits, and furthermore constrains it to operate in a highly non-equilibrium thermodynamic environment. This proposal resolves the thermodynamic issues and invokes the correct paradigm for understanding the vital area of thermodynamic/organisational interactions, which despite the efforts from alternative paradigms has not given a satisfactory explanation of the way information in systems operates.,,, http://www.worldscientific.com/doi/abs/10.1142/9789814508728_0008
And in support of Dr. McIntosh’s contention that it must be non-material information which constrains biological life to be so far out of thermodynamic equilibrium, information has now been experimentally shown to have a ‘thermodynamic content’:
Demonic device converts information to energy – 2010 Excerpt: “This is a beautiful experimental demonstration that information has a thermodynamic content,” says Christopher Jarzynski, a statistical chemist at the University of Maryland in College Park. In 1997, Jarzynski formulated an equation to define the amount of energy that could theoretically be converted from a unit of information2; the work by Sano and his team has now confirmed this equation. “This tells us something new about how the laws of thermodynamics work on the microscopic scale,” says Jarzynski. http://www.scientificamerican.com/article.cfm?id=demonic-device-converts-inform Maxwell’s demon demonstration (knowledge of a particle’s position) turns information into energy – November 2010 Excerpt: Scientists in Japan are the first to have succeeded in converting information into free energy in an experiment that verifies the “Maxwell demon” thought experiment devised in 1867.,,, In Maxwell’s thought experiment the demon creates a temperature difference simply from information about the gas molecule temperatures and without transferring any energy directly to them.,,, Until now, demonstrating the conversion of information to energy has been elusive, but University of Tokyo physicist Masaki Sano and colleagues have succeeded in demonstrating it in a nano-scale experiment. In a paper published in Nature Physics they describe how they coaxed a Brownian particle to travel upwards on a “spiral-staircase-like” potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information. http://www.physorg.com/news/2010-11-maxwell-demon-energy.html Information: From Maxwell’s demon to Landauer’s eraser – Lutz and Ciliberto – Oct. 25, 2015 – Physics Today Excerpt: The above examples of gedanken-turned-real experiments provide a firm empirical foundation for the physics of information and tangible evidence of the intimate connection between information and energy. They have been followed by additional experiments and simulations along similar lines.12 (See, for example, Physics Today, August 2014, page 60.) Collectively, that body of experimental work further demonstrates the equivalence of information and thermodynamic entropies at thermal equilibrium.,,, (2008) Sagawa and Ueda’s (theoretical) result extends the second law to explicitly incorporate information; it shows that information, entropy, and energy should be treated on equal footings. http://www.johnboccio.com/research/quantum/notes/Information.pdf J. Parrondo, J. Horowitz, and T. Sagawa. Thermodynamics of information. Nature Physics, 11:131-139, 2015.
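The benchmark figure behind these experiments (not stated in the excerpts above) is the Landauer/Szilard value: at temperature T, one bit corresponds to at most k_B·T·ln 2 of extractable work, or equivalently that much heat dissipated to erase the bit. A quick sketch of the number at room temperature:

```python
# Landauer/Szilard benchmark: maximum work per bit (or minimum heat to erase one bit)
# at temperature T is k_B * T * ln(2).
import math

K_B = 1.381e-23          # Boltzmann's constant, J/K
T_ROOM = 300.0           # K, roughly room temperature

energy_per_bit = K_B * T_ROOM * math.log(2)
print(f"{energy_per_bit:.2e} J per bit (about {energy_per_bit / 1.602e-19:.3f} eV)")
# ~2.87e-21 J, i.e. ~0.018 eV per bit
```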
It is also interesting to note just how much information is involved in keeping life so far out of thermodynamic equilibrium with the rest of the environment:
Biophysics – Information theory. Relation between information and entropy: - Setlow-Pollard, Ed. Addison Wesley Excerpt: Linschitz gave the figure 9.3 x 10^-12 cal/deg or 9.3 x 10^-12 x 4.2 joules/deg for the entropy of a bacterial cell. Using the relation H = S/(k ln 2), we find that the information content is 4 x 10^12 bits. Morowitz' deduction from the work of Bayne-Jones and Rhees gives the lower value of 5.6 x 10^11 bits, which is still in the neighborhood of 10^12 bits. Thus two quite different approaches give rather concordant figures. http://www.astroscu.unam.mx/~angel/tsb/molecular.htm “a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widener Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.” – R. C. Wysong
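The Setlow-Pollard arithmetic quoted above is easy to reproduce; the sketch below uses Linschitz's figure of 9.3 x 10^-12 cal/K (about 3.9 x 10^-11 J/K) together with the relation H = S/(k_B ln 2):

```python
# Reproducing the quoted Setlow-Pollard estimate: H = S / (k_B * ln 2).
import math

K_B = 1.381e-23                  # Boltzmann's constant, J/K
S_CELL = 9.3e-12 * 4.2           # J/K (9.3e-12 cal/K converted at ~4.2 J/cal, as in the excerpt)

H_bits = S_CELL / (K_B * math.log(2))
print(f"{H_bits:.1e} bits")      # ~4e12 bits, matching the figure quoted above
```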
Of related note to immaterial information having a ‘thermodynamic content’, classical digital information was found to be a subset of ‘non-local’, (i.e. beyond space and time), quantum entanglement/information by the following method which removed heat from a computer by the deletion of data:
Quantum knowledge cools computers: New understanding of entropy – June 2011 Excerpt: No heat, even a cooling effect; In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.” http://www.sciencedaily.com/releases/2011/06/110601134300.htm Scientists show how to erase information without using energy – January 2011 Excerpt: Until now, scientists have thought that the process of erasing information requires energy. But a new study shows that, theoretically, information can be erased without using any energy at all.,,, “Landauer said that information is physical because it takes energy to erase it. We are saying that the reason it (information) is physical has a broader context than that.”, Vaccaro explained. http://www.physorg.com/news/2011-01-scientists-erase-energy.html
The preceding paper was experimentally verified:
What is Information - video https://youtu.be/2AvIOzVJMCM New Scientist astounds: Information is physical – May 13, 2016 Excerpt: Recently came the most startling demonstration yet: a tiny machine powered purely by information, which chilled metal through the power of its knowledge. This seemingly magical device could put us on the road to new, more efficient nanoscale machines, a better understanding of the workings of life, and a more complete picture of perhaps our most fundamental theory of the physical world. https://uncommondescent.com/news/new-scientist-astounds-information-is-physical/
bornagain77
Of related note: ATP synthase was 'unexpectedly' found to be, thermodynamically, 100% efficient:
Your Motor/Generators Are 100% Efficient – October 2011 Excerpt: ATP synthase astounds again. The molecular machine that generates almost all the ATP (molecular “energy pellets”) for all life was examined by Japanese scientists for its thermodynamic efficiency. By applying and measuring load on the top part that synthesizes ATP, they were able to determine that one cannot do better at getting work out of a motor,,, The article was edited by noted Harvard expert on the bacterial flagellum, Howard Berg. http://crev.info/content/111014-your_motor_generators Thermodynamic efficiency and mechanochemical coupling of F1-ATPase - 2011 Excerpt: F1-ATPase is a nanosized biological energy transducer working as part of FoF1-ATP synthase. Its rotary machinery transduces energy between chemical free energy and mechanical work and plays a central role in the cellular energy transduction by synthesizing most ATP in virtually all organisms.,, Our results suggested a 100% free-energy transduction efficiency and a tight mechanochemical coupling of F1-ATPase. http://www.pnas.org/content/early/2011/10/12/1106787108.short?rss=1
As well, photosynthesis itself was 'unexpectedly' found to be astonishingly efficient. Moreover, photosynthesis achieves this astonishing efficiency by overcoming thermodynamic noise by way of 'quantum coherence':
Unlocking nature's quantum engineering for efficient solar energy - January 7, 2013 Excerpt: Certain biological systems living in low light environments have unique protein structures for photosynthesis that use quantum dynamics to convert 100% of absorbed light into electrical charge,,, "Some of the key issues in current solar cell technologies appear to have been elegantly and rigorously solved by the molecular architecture of these PPCs – namely the rapid, lossless transfer of excitons to reaction centres.",,, These biological systems can direct a quantum process, in this case energy transport, in astoundingly subtle and controlled ways – showing remarkable resistance to the aggressive, random background noise of biology and extreme environments. "This new understanding of how to maintain coherence in excitons, and even regenerate it through molecular vibrations, provides a fascinating glimpse into the intricate design solutions – seemingly including quantum engineering – ,,, and which could provide the inspiration for new types of room temperature quantum devices." http://phys.org/news/2013-01-nature-quantum-efficient-solar-energy.html Uncovering Quantum Secret in Photosynthesis - June 20, 2013 Excerpt: Photosynthetic organisms, such as plants and some bacteria, have mastered this process: In less than a couple of trillionths of a second, 95 percent of the sunlight they absorb is whisked away to drive the metabolic reactions that provide them with energy. The efficiency of photovoltaic cells currently on the market is around 20 percent.,,, Van Hulst and his group have evaluated the energy transport pathways of separate individual but chemically identical, antenna proteins, and have shown that each protein uses a distinct pathway. The most surprising discovery was that the transport paths within single proteins can vary over time due to changes in the environmental conditions, apparently adapting for optimal efficiency. "These results show that coherence, a genuine quantum effect of superposition of states, is responsible for maintaining high levels of transport efficiency in biological systems, even while they adapt their energy transport pathways due to environmental influences" says van Hulst. http://www.sciencedaily.com/releases/2013/06/130620142932.htm
Moreover, protein folding is not achieved by random thermodynamic jostling but is found to be achieved by 'quantum transition':
Physicists Discover Quantum Law of Protein Folding – February 22, 2011 Quantum mechanics finally explains why protein folding depends on temperature in such a strange way. Excerpt: First, a little background on protein folding. Proteins are long chains of amino acids that become biologically active only when they fold into specific, highly complex shapes. The puzzle is how proteins do this so quickly when they have so many possible configurations to choose from. To put this in perspective, a relatively small protein of only 100 amino acids can take some 10^100 different configurations. If it tried these shapes at the rate of 100 billion a second, it would take longer than the age of the universe to find the correct one. Just how these molecules do the job in nanoseconds, nobody knows.,,, Today, Luo and Lo say these curves can be easily explained if the process of folding is a quantum affair. By conventional thinking, a chain of amino acids can only change from one shape to another by mechanically passing though various shapes in between. But Luo and Lo say that if this process were a quantum one, the shape could change by quantum transition, meaning that the protein could ‘jump’ from one shape to another without necessarily forming the shapes in between.,,, Their astonishing result is that this quantum transition model fits the folding curves of 15 different proteins and even explains the difference in folding and unfolding rates of the same proteins. That's a significant breakthrough. Luo and Lo's equations amount to the first universal laws of protein folding. That’s the equivalent in biology to something like the thermodynamic laws in physics. http://www.technologyreview.com/view/423087/physicists-discover-quantum-law-of-protein/
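A minimal sketch of the configuration-counting arithmetic in the excerpt above (the 10^100 configurations and the sampling rate of 100 billion per second are the excerpt's figures; the age of the universe in seconds is a standard value):

```python
# Figures from the excerpt: 10^100 configurations, tried at 100 billion per second
configs = 1e100
rate_per_second = 1e11

age_of_universe_s = 4.35e17   # ~13.8 billion years, in seconds

time_needed_s = configs / rate_per_second     # 1e89 seconds
ratio = time_needed_s / age_of_universe_s     # ~2e71 ages of the universe

print(f"Time to sample every configuration: {time_needed_s:.1e} s")
print(f"That is about {ratio:.1e} times the age of the universe")
```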
Besides photosynthesis and protein folding unexpectedly not being subject to 'thermodynamic jostling', in the following paper, finding a lack of 'random' collisions in a crowded cell was a 'counterintuitive surprise' for researchers:
Proteins put up with the roar of the crowd - June 23, 2016 Excerpt: It gets mighty crowded around your DNA, but don't worry: According to Rice University researchers, your proteins are nimble enough to find what they need. Rice theoretical scientists studying the mechanisms of protein-DNA interactions in live cells showed that crowding in cells doesn't hamper protein binding as much as they thought it did.,,, If DNA can be likened to a library, it surely is a busy one. Molecules roam everywhere, floating in the cytoplasm and sticking to the tightly wound double helix. "People know that almost 90 percent of DNA is covered with proteins, such as polymerases, nucleosomes that compact two meters into one micron, and other protein molecules," Kolomeisky said.,,, That makes it seem that proteins sliding along the strand would have a tough time binding, and it's possible they sometimes get blocked. But the Rice team's theory and simulations indicated that crowding agents usually move just as rapidly, sprinting out of the way. "If they move at the same speed, the molecules don't bother each other," Kolomeisky said. "Even if they're covering a region, the blockers move away quickly so your protein can bind." In previous research, the team determined that stationary obstacles sometimes help quicken a protein's search for its target by limiting options. This time, the researchers sought to define how crowding both along DNA and in the cytoplasm influenced the process. "We may think everything's fixed and frozen in cells, but it's not," Kolomeisky said. "Everything is moving.",,, Floating proteins appear to find their targets quickly as well. "This was a surprise," he said. "It's counterintuitive, because one would think collisions between a protein and other molecules on DNA would slow it down. But the system is so dynamic (and so well designed?), it doesn't appear to be an issue." http://phys.org/news/2016-06-proteins-roar-crowd.html
In fact, in the following video at the 2:30 minute mark, Jim Al-Khalili states that, in regards to quantum mechanics, “Biologists, on the other hand, have got off lightly in my view”:
",,and Physicists and Chemists have had a long time to try and get use to it (Quantum Mechanics). Biologists, on the other hand have got off lightly in my view. They are very happy with their balls and sticks models of molecules. The balls are the atoms. The sticks are the bonds between the atoms. And when they can't build them physically in the lab nowadays they have very powerful computers that will simulate a huge molecule.,, It doesn't really require much in the way of quantum mechanics in the way to explain it." Jim Al-Khalili – Quantum biology – video https://www.youtube.com/watch?v=zOzCkeTPR3Q
At the 6:52 minute mark of the video, Jim Al-Khalili goes on to state that life has a certain order 'that’s very different from the random thermodynamic jostling of atoms and molecules in inanimate matter of the same complexity. In fact, living matter seems to behave in its order and its structure just like inanimate matter cooled down to near absolute zero':
“To paraphrase, (Erwin Schrödinger in his book “What Is Life”), he says at the molecular level living organisms have a certain order. A structure to them that’s very different from the random thermodynamic jostling of atoms and molecules in inanimate matter of the same complexity. In fact, living matter seems to behave in its order and its structure just like inanimate matter cooled down to near absolute zero. Where quantum effects play a very important role. There is something special about the structure, about the order, inside a living cell. So Schrödinger speculated that maybe quantum mechanics plays a role in life”. Jim Al-Khalili – Quantum biology – video https://www.youtube.com/watch?v=zOzCkeTPR3Q
And indeed, in regards to quantum biology, there is much evidence confirming the fact that current biologists working under the reductive materialistic framework of Darwinian evolution are not even using the correct theoretical framework to properly understand life in the first place:
Molecular Biology - 19th Century Materialism meets 21st Century Quantum Mechanics - video https://www.youtube.com/watch?v=rCs3WXHqOv8&index=3&list=PLtAP1KN7ahiYxgYCc-0xiUAhNWjT4q6LD
bornagain77
Moreover, even though the energy allowed to enter the atmosphere of the Earth is constrained, i.e. finely-tuned, to 1 trillionth of a trillionth of the entire electromagnetic spectrum, that still does not fully negate the disordering effects of pouring raw energy into an open system. This is made evident by the fact that objects left out in the sun age and deteriorate much more quickly than objects stored inside in cool conditions, away from the sun and heat. The following video clearly illustrates that just pouring raw energy into an 'open system' actually increases the disorder of the system:
Thermodynamic Arguments for Creation - Thomas Kindell (46:39 minute mark) - video https://www.youtube.com/watch?v=I1yto0-z2bQ&feature=player_detailpage#t=2799
To offset this disordering effect that the raw energy of sunlight has on objects, the raw energy from the sun must be processed further to be of biological utility. This is accomplished by photosynthesis which converts sunlight into ATP. To say that photosynthesis defies Darwinian explanation is to make a dramatic understatement:
Evolutionary biology: Out of thin air - John F. Allen & William Martin: The measure of the problem is here: “Oxygenic photosynthesis involves about 100 proteins that are highly ordered within the photosynthetic membranes of the cell." http://www.nature.com/nature/journal/v445/n7128/full/445610a.html The Elaborate Nanoscale Machine Called Photosynthesis: No Vestige of a Beginning - Cornelius Hunter - July 2012 Excerpt: "The ability to do photosynthesis is widely distributed throughout the bacterial domain in six different phyla, with no apparent pattern of evolution. Photosynthetic phyla include the cyanobacteria, proteobacteria (purple bacteria), green sulfur bacteria (GSB), firmicutes (heliobacteria), filamentous anoxygenic phototrophs (FAPs, also often called the green nonsulfur bacteria), and acidobacteria (Raymond, 2008)." http://darwins-god.blogspot.com/2012/07/elaborate-nanoscale-machine-called.html?showComment=1341739083709#c1202402748048253561 Enzymes and protein complexes needed in photosynthesis - with graphs http://elshamah.heavenforum.org/t1637-enzymes-and-protein-complexes-needed-in-photosynthesis#2527 The 10 Step Glycolysis Pathway In ATP Production: An Overview - video http://www.youtube.com/watch?v=8Kn6BVGqKd8
At the 14:00 minute mark of the following video, Chris Ashcraft, PhD – molecular biology, gives us an overview of the Citric Acid Cycle, which is, after the 10 step Glycolysis Pathway, also involved in ATP production:
Evolution vs ATP Synthase – Chris Ashcraft - video - citric acid cycle at 14:00 minute mark https://www.youtube.com/watch?feature=player_detailpage&v=rUV4CSs0HzI#t=746 The Citric Acid Cycle: An Overview - video http://www.youtube.com/watch?v=F6vQKrRjQcQ
Moreover, there is a profound 'chicken and egg' dilemma with ATP production for evolutionists:
Evolutionist Has Another Honest Moment as “Thorny Questions Remain” - Cornelius Hunter - July 2012 Excerpt: It's a chicken and egg question. Scientists are in disagreement over what came first -- replication, or metabolism. But there is a third part to the equation -- and that is energy. … You need enzymes to make ATP and you need ATP to make enzymes. The question is: where did energy come from before either of these two things existed? http://darwins-god.blogspot.com/2012/07/evolutionist-has-another-honest-moment.html
bornagain77
Granville Sewell asks:
Why Tornados Running Backward do not Violate the Second Law – Granville Sewell – May 2012 – article with video Excerpt: So, how does the spontaneous rearrangement of matter on a rocky, barren, planet into human brains and spaceships and jet airplanes and nuclear power plants and libraries full of science texts and novels, and supercomputers running partial differential equation solving software , represent a less obvious or less spectacular violation of the second law—or at least of the fundamental natural principle behind this law—than tornados turning rubble into houses and cars? Can anyone even imagine a more spectacular violation? https://uncommondescent.com/intelligent-design/why-tornados-running-backward-do-not-violate-the-second-law/
Granville Sewell further notes that “the very equations of entropy change upon which this compensation argument is based actually support, on closer examination, the common sense conclusion that “if an increase in order is extremely improbable when a system is isolated, it is still extremely improbable when the system is open, unless something is entering which makes it not extremely improbable.””
The Common Sense Law of Physics Granville Sewell – March 2016 Excerpt: (The) “compensation” argument, used by every physics text which discusses evolution and the second law to dismiss the claim that what has happened on Earth may violate the more general statements of the second law, was the target of my article “Entropy, Evolution, and Open Systems,” published in the proceedings of the 2011 Cornell meeting Biological Information: New Perspectives (BINP). In that article, I showed that the very equations of entropy change upon which this compensation argument is based actually support, on closer examination, the common sense conclusion that “if an increase in order is extremely improbable when a system is isolated, it is still extremely improbable when the system is open, unless something is entering which makes it not extremely improbable.” The fact that order can increase in an open system does not mean that computers can appear on a barren planet as long as the planet receives solar energy. Something must be entering our open system that makes the appearance of computers not extremely improbable, for example: computers. http://www.evolutionnews.org/2016/03/the_common_sens102725.html
Moreover, Dr. Sewell has empirical evidence backing up his claim. Specifically, empirical evidence and numerical simulations tell us that "Genetic Entropy", i.e. the tendency of biological systems to drift towards decreasing complexity and decreasing information content, holds true as an overriding rule for biological adaptations over long periods of time:
“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain – Michael Behe – December 2010 Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain. http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/ Can Purifying Natural Selection Preserve Biological Information? – May 2013 – Paul Gibson, John R. Baumgardner, Wesley H. Brewer, John C. Sanford In conclusion, numerical simulation shows that realistic levels of biological noise result in a high selection threshold. This results in the ongoing accumulation of low-impact deleterious mutations, with deleterious mutation count per individual increasing linearly over time. Even in very long experiments (more than 100,000 generations), slightly deleterious alleles accumulate steadily, causing eventual extinction. These findings provide independent validation of previous analytical and simulation studies [2–13]. Previous concerns about the problem of accumulation of nearly neutral mutations are strongly supported by our analysis. Indeed, when numerical simulations incorporate realistic levels of biological noise, our analyses indicate that the problem is much more severe than has been acknowledged, and that the large majority of deleterious mutations become invisible to the selection process.,,, http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0010 Genetic Entropy – references to several peer reviewed numerical simulations analyzing and falsifying all flavors of Darwinian evolution,, (via John Sanford and company) http://www.geneticentropy.org/#!properties/ctzx
In their compensation argument, Darwinists claim that the second law does not contradict evolution as long as you have energy entering the 'open system'. In this case the open system is the Earth. Yet, what Darwinists do not tell you is that the energy allowed to enter the atmosphere of the Earth is constrained, i.e. finely-tuned, to 1 trillionth of a trillionth of the entire electromagnetic spectrum:
8:12 minute mark,,, "These specific frequencies of light (that enable plants to manufacture food and astronomers to observe the cosmos) represent less than 1 trillionth of a trillionth (10^-24) of the universe’s entire range of electromagnetic emissions." Fine tuning of Light, Atmosphere, and Water to Photosynthesis (etc..) – video (2016) - https://youtu.be/NIwZqDkrj9I?t=384
As the preceding video highlighted, visible light is incredibly fine-tuned for life to exist on earth. Though visible light is only a tiny fraction of the total electromagnetic spectrum coming from the sun, it happens to be the "most permitted" portion of the sun's spectrum allowed to filter through our atmosphere. All the other bands of electromagnetic radiation, directly surrounding visible light, happen to be harmful to organic molecules, and are almost completely absorbed by the earth's magnetic shield and the earth's atmosphere. The size of light's wavelengths and the constraints on the size allowable for the protein molecules of organic life strongly indicate that they were tailor-made for each other:
The visible portion of the electromagnetic spectrum (~1 micron) is the most intense radiation from the sun (Figure 1); has the greatest biological utility (Figure 2); and easily passes through the atmosphere of Earth (Figure 3) and water (Figure 4) with almost no absorption. It is uniquely this same wavelength of radiation that is ideal to foster the chemistry of life. This is either a truly amazing series of coincidences or else the result of careful design. - Walter Bradley, Is There Scientific Evidence for the Existence of God? How the Recent Discoveries Support a Designed Universe - http://www.leaderu.com/offices/bradley/docs/scievidence.html
Moreover, the light coming from the sun must be of the 'right color'
The " just right " relationship of the light spectrum and photosynthesis Excerpt: The American astronomer George Greenstein discusses this in The Symbiotic Universe, p 96: Chlorophyll is the molecule that accomplishes photosynthesis... The mechanism of photosynthesis is initiated by the absorption of sunlight by a chlorophyll molecule. But in order for this to occur, the light must be of the right color. Light of the wrong color won't do the trick. A good analogy is that of a television set. In order for the set to receive a given channel it must be tuned to that channel; tune it differently and the reception will not occur. It is the same with photosynthesis, the Sun functioning as the transmitter in the analogy and the chlorophyll molecule as the receiving TV set. If the molecule and the Sun are not tuned to each other-tuned in the sense of colour- photosynthesis will not occur. As it turns out, the sun's color is just right. One might think that a certain adaptation has been at work here: the adaptation of plant life to the properties of sunlight. After all, if the Sun were a different temperature could not some other molecule, tuned to absorb light of a different colour, take the place of chlorophyll? Remarkably enough the answer is no, for within broad limits all molecules absorb light of similar colours. The absorption of light is accomplished by the excitation of electrons in molecules to higher energy states, and (are) the same no matter what molecule you are discussing. Furthermore, light is composed of photons, packets of energy and photons of the wrong energy simply can not be absorbed… As things stand in reality, there is a good fit between the physics of stars and that of molecules. Failing this fit, however, life would have been impossible. The harmony between stellar and molecular physics that Greenstein refers to is a harmony too extraordinary ever to be explained by chance. There was only one chance in 10^25 of the Sun's providing just the right kind of light necessary for us and that there should be molecules in our world that are capable of using that light. This perfect harmony is unquestionably proof of Intelligent Design. http://elshamah.heavenforum.org/t1927-the-just-right-relationship-of-the-light-spectrum-and-photosynthesis
bornagain77
GD, a happy new year to you and to others. I passed by to see how UD got along overnight, and see your exchange with Mung. I think I should make a comment or a few.

First, your basic problem is to suggest that Earth is an entropy exporter, when in fact the primary issue is that it imports heat, and thus is subject to the disorganising impact of such heat. That there is exhaust of heat to a reservoir at a lower temperature is a necessary but not a sufficient condition for a successful heat engine, especially when the issue is not merely to move to order -- e.g. a hurricane in this sense is a spontaneously formed heat engine that provides orderly motion of winds [and disorders just about everything it impacts] -- but functionally specific, complex organisation and associated information.

In general, assembly of complex, specifically functional organised systems is only observed on an assembly plan, i.e. on equally organised assembly. Which is exactly what we find -- per actual OBSERVATION (not ideologically loaded inference) -- in life forms also. Think protein synthesis, if you doubt me. If life forms were as simplistic and orderly a system as a hurricane, we would not be likely to be seeing the sort of elaborate step-by-step construction of proteins under numerical control and algorithmic programs that we find in ribosomes and the like in the living cell. In short, there is something the living cell is trying to tell you.

The point that such FSCO/I points to is that once the equivalent binary string needed to describe the specific, functional information goes beyond 500 - 1,000 bits, the atomic and temporal resources of the observed cosmos are maximally unlikely ever to find such islands of function -- imposed by requisites of multi-part, matched arrangement and coupling to achieve a unified result -- in beyond-astronomical config spaces. That is, unobservable on 10^57 atoms [sol system scale], 10^17 s and 10^12 - 14 observations per atom per sec [~ fast org chem rxns] for the lower end, and similarly but with 10^80 atoms [observed cosmos] at the upper (a worked version of this arithmetic is sketched after this comment). At the upper end, the number of observations can be compared to a straw, and on this the haystack to be searched would dwarf an observed cosmos some 90 bn LY across.

In short, there is a reason why, on trillions of examples, FSCO/I is uniformly seen to result from intelligently directed configuration. Which brings to bear Newton's vera causa principle: when we seek to explain what we have not directly observed, we must infer only to causes we have observed to be able to produce the like result. (If we did that, the whole evolutionary materialist account of the origin of life and/or body plans would instantly collapse. Which would be to the good. Better to acknowledge ignorance than to pretend to know what we do not, or to impose ideological agendas by the back door of methodological naturalism. Then we can have a real look at the pivotal issue: the origin of the required FSCO/I.)

That is, the inference to design as best causal explanation on seeing FSCO/I is strong, being empirically reliable [trillions of cases] and analytically grounded in a hopeless search challenge.

Further to this, I find the following summary of what entropy is as a micro phenomenon highly instructive; here taken from an admission against patent interest in a wiki article on informational views on entropy (as observed nigh on six years past and put up in my always linked note):
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
This missing-info-to-specify view is highly instructive. And I find it particularly saddening to see how there is still resort to the notion that, as the earth is an open system, entropy can decrease by its exporting energy and so there is nothing to be explained, move along, nothing to see. Sorry, that is a strawman argument. KF

PS: I suggest you will find the two discussions linked in that always linked note useful. Notice, in the latter, a discussion of Clausius' context of derivation of entropy and its extension to address the sort of issue raised by FSCO/I. kairosfocus
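Here is the worked version of the config-space arithmetic kairosfocus cites above, using his stated numbers (10^57 atoms, 10^17 s, and the upper 10^14 observations per atom per second); this simply restates that arithmetic rather than offering an independent estimate:

```python
# kairosfocus's stated resources (upper end of his observation-rate range)
atoms_sol_system = 1e57        # atoms, solar-system scale
duration_s = 1e17              # seconds
obs_per_atom_per_s = 1e14      # observations per atom per second

total_observations = atoms_sol_system * duration_s * obs_per_atom_per_s   # 1e88
config_space_500_bits = 2.0 ** 500                                        # ~3.3e150

print(f"Possible observations:        {total_observations:.1e}")
print(f"500-bit configuration space:  {config_space_500_bits:.1e}")
print(f"Fraction that can be sampled: {total_observations / config_space_500_bits:.1e}")
```

On these figures the sampled fraction comes out to roughly 10^-63 of the 500-bit space, which is the "straw versus haystack" comparison he draws.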
Mung:
There’s no such thing as a decrease in entropy.
There is absolutely such a thing as a decrease in entropy. Take the classical definition of entropy: dS = dQ_rev/T. That means for any reversible process, when there's heat leaving the system (dQ < 0), there will be an entropy decrease (dS < 0). (Assuming T > 0. Negative absolute temperatures are weird, and I'm not going to worry about them here.)
Things do not get cooler because of any decrease in entropy, and water does not freeze because there’s a decrease in entropy.
The entropy decrease doesn't cause cooling or freezing. It's more accurate to say that cooling and freezing are processes that cause entropy to decrease. For example, a given amount of water in the liquid state has higher entropy than the same amount of ice. So when water freezes, it's moving from a higher-entropy state to a lower-entropy state. That is a decrease in entropy. Gordon Davisson
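A rough numerical version of this freezing example, assuming standard textbook values for water's latent heat of fusion (not figures taken from the thread):

```python
latent_heat_fusion = 334.0   # J released per gram of water that freezes (approximate)
T_freeze = 273.15            # K

dS_water = -latent_heat_fusion / T_freeze          # entropy change of the water, J/K per gram
dS_surroundings = latent_heat_fusion / T_freeze    # minimum entropy gain of the surroundings
# (the surroundings must be at or below 273.15 K for freezing to occur, so their
#  actual gain, Q/T_surroundings, is at least this large)

print(f"Water:        {dS_water:+.2f} J/K per gram")         # about -1.22
print(f"Surroundings: {dS_surroundings:+.2f} J/K per gram (at minimum)")
```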
Mung: Since when is a decrease in entropy not a violation of the second law of thermodynamics?
Gordon: When it’s coupled to an equal-or-larger entropy increase somewhere else. That’s why things can get cooler, water freeze, etc without violating the second law.
There's no such thing as a decrease in entropy. Things do not get cooler because of any decrease in entropy, and water does not freeze because there's a decrease in entropy. Mung
Gordon:
Or take the statistical mechanics approach, where entropy is defined in terms of the number of microscopically distinct states the system might be in; there, quantum mechanics gives you an absolute value.
The statistical approach agrees with the classical approach. Mung
Gordon:
Note that the classical definition of entropy only defines changes in entropy, not its absolute value.
I agree. Now apply that to your claims about the entropy of the earth. Mung
Gordon:
There are many ways of defining entropy.
How do we decide which definition applies?
There are many ways of defining entropy; the original thermodynamic definition (dS = dQ_rev/T) only applies to systems at equilibrium, but the statistical definitions can be applied to a much wider variety of systems.
How many statistical definitions of entropy are there? Mung
Mung:
The earth is not an isolated thermodynamic system at equilibrium. The entropy is not well-defined.
There are many ways of defining entropy; the original thermodynamic definition (dS = dQ_rev/T) only applies to systems at equilibrium, but the statistical definitions can be applied to a much wider variety of systems. But actually, if the Earth is what you're interested in, you don't need to go that far. Most of Earth can be broken up into small enough pieces that each one is very close to equilibrium; apply the classical definition to each one, then sum them to get a good approximation to the total entropy for Earth. But...
How on earth do you propose to calculate the entropy change if you can’t calculate the entropy?
Note that the classical definition of entropy only defines changes in entropy, not its absolute value. To get absolute entropies, you need to add something like the assumption that an ordered crystal at a temperature of absolute zero has entropy zero (and then, to get the entropies of things in other states, find a reversible path between that and the ordered crystal and integrate dQ/T). Or take the statistical mechanics approach, where entropy is defined in terms of the number of microscopically distinct states the system might be in; there, quantum mechanics gives you an absolute value. But none of these are necessary to calculate entropy changes or apply the second law.
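A minimal sketch of the statistical (Boltzmann) definition mentioned here, S = k_B ln W, for a toy system of N independent two-state degrees of freedom; the value of N is an arbitrary assumption chosen for illustration:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
N = 1e20             # assumed number of independent two-state degrees of freedom

# W = 2^N microstates, so S = k_B * ln(W) = N * k_B * ln(2); computing it this
# way avoids forming the astronomically large W explicitly
S_absolute = N * k_B * math.log(2)
print(f"Absolute entropy: {S_absolute:.2e} J/K")   # ~9.6e-4 J/K
```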
Since when is a decrease in entropy not a violation of the second law of thermodynamics?
When it's coupled to an equal-or-larger entropy increase somewhere else. That's why things can get cooler, water freeze, etc without violating the second law. Gordon Davisson
Gordon:
Entropy flux is essentially a way of tracking the interactions between systems that couple an entropy decrease in one system to an increase in another system.
This sets off immediate alarm bells. Since when is a decrease in entropy not a violation of the second law of thermodynamics? Mung
Gordon:
I don’t see how this is relevant to my point.
It's relevant in that it points out that you haven't performed the relevant calculations. For example, do you know the absolute entropy of the earth? Frankly, I think the question itself, what is the entropy of the earth, is nonsensical. The earth is not an isolated thermodynamic system at equilibrium. The entropy is not well-defined. Same with the surroundings. How on earth do you propose to calculate the entropy change if you can't calculate the entropy? Mung
Correct, I’m using a metaphor. Entropy isn’t really a “thing”, and hence cannot move from place to place. But it acts like a thing, so treating it as one provides a set of useful intuitions and metaphors about how it behaves.
I don't know of a single metaphor for entropy that isn't misleading. I agree that entropy is not a thing. I agree that entropy doesn't move from place to place. So what is entropy? Mung
Mung:
No one knows what the entropy of the earth is any more than they know what the entropy of the universe is.
I don't see how this is relevant to my point. Also, you could probably make a good estimate of the Earth's entropy, although you'd need to know quite a bit about its composition (hint: most of its entropy is going to be in its largest components, the mantle and core), temperature profile, etc and the thermodynamic properties of those components. As for the universe, you could also probably make a reasonable estimate of its average entropy density, but the big unknown is going to be its overall size.
And entropy is not something that is “emitted.”
Correct, I'm using a metaphor. Entropy isn't really a "thing", and hence cannot move from place to place. But it acts like a thing, so treating it as one provides a set of useful intuitions and metaphors about how it behaves. Technically, what I'm talking about is the Earth's entropy flux. Entropy flux is essentially a way of tracking the interactions between systems that couple an entropy decrease in one system to an increase in another system. For instance, if there's a near-equilibrium heat flow from one system to another, it'll be associated with an entropy flux of Q/T (amount of heat divided by the absolute temperature), meaning that the entropy of the system the heat is flowing from might decrease by up to Q/T, and the entropy of the system it's flowing to must increase by at least Q/T. So you can think of the heat flow as carrying Q/T of entropy from one system to the other, even though that's technically wrong. In the case of Earth, there's an even simpler way to put it, since the entropy flux I'm describing is just the entropy of the light entering and leaving Earth. The sunlight reaching Earth each second has entropy 3.83e13 J/K. The entropy of the thermal radiation from Earth to deep space is hard to calculate exactly, but it's easy to get a lower bound of 3.7e14 J/K per second. That means the difference is at least 3.3e14 J/K per second. Details are here. (BTW, I cited the wrong figure in my first comment. I gave the entropy of the light leaving the Earth rather than the difference; 3.3e14 J/K is the figure I should have given.)
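A rough order-of-magnitude sketch of the Q/T bookkeeping described here, using assumed round values for the absorbed solar power and the effective temperatures of the incoming and outgoing radiation; this is not the linked calculation, but it lands in the same order of magnitude as the figures above:

```python
P_absorbed = 1.2e17   # W, approximate solar power absorbed by Earth (assumed)
T_sun = 5800.0        # K, approximate effective temperature of incoming sunlight (assumed)
T_earth = 255.0       # K, approximate effective radiating temperature of Earth (assumed)

entropy_in_per_s = P_absorbed / T_sun      # ~2e13 J/K per second carried in by sunlight
entropy_out_per_s = P_absorbed / T_earth   # ~5e14 J/K per second carried out as thermal radiation

print(f"Entropy in:  {entropy_in_per_s:.1e} J/K per second")
print(f"Entropy out: {entropy_out_per_s:.1e} J/K per second")
print(f"Net export:  {entropy_out_per_s - entropy_in_per_s:.1e} J/K per second")
```

The net export is positive because the same amount of energy leaves at a much lower temperature than it arrived, which is the point the comment is making.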
The earth is not an isolated system, nor is it at an equilibrium, nor are its surroundings.
I agree completely. Gordon Davisson
as I’ve pointed out repeatedly, the Earth as a whole emits more entropy to its surroundings

I'm sorry, but this is just nonsense. No one knows what the entropy of the earth is any more than they know what the entropy of the universe is. And entropy is not something that is "emitted." The earth is not an isolated system, nor is it at an equilibrium, nor are its surroundings.

Modern Thermodynamics
Entropy: The Truth, the Whole Truth, and Nothing But the Truth
Mung
From Rob Sheldon:
In physics, we discuss reversible and irreversible reactions. If entropy (or information) is unchanged, then the system is reversible. If entropy increases (loss of information), then the reaction cannot be reversed. Outside of Darwin’s theory of evolution, there are no irreversible reactions in which entropy decreases (information is gained), because that would enable a perpetual motion machine.
Good grief. When are you going to stop peddling this nonsense? Do you even care how thoroughly it's been refuted? Ok, let's go over why it's wrong. Again.

First, consider the identification of entropy increases with loss of information and entropy decreases with gain of information. This isn't actually wrong, but it requires you to use a very unusual definition of "information", and one that's pretty irrelevant to the usual information-based arguments for ID. To illustrate the problem, consider that cooling an object decreases its entropy, while heating an object increases its entropy. Does that mean that cooling an object increases information, and heating it loses information? If we accept Rob's view, it has to mean that.

As I said, this isn't actually wrong, you just need to adopt a very unusual definition of information. Essentially, you need to be looking at how much information you have about the precise physical state of the object. Take a simple example: when water freezes, the molecules that make it up move into a much more regular arrangement (a crystal), instead of the mostly-disordered mess they made in the liquid state. This means you know a lot more about the arrangement of the molecules in the solid (crystalline) state than you did in the liquid state. Since you learn something about each molecule, and there are a lot of molecules, this winds up being a huge amount of information; about 1.5 * 10^23 bits of information per gram of water.

Now, I chose freezing and melting as an example because the loss/gain of information is fairly easy (heh!) to understand in terms of knowing/not knowing about the arrangement of molecules, but the same thing happens with heating/cooling even when a phase change isn't involved. If you heat liquid water above freezing, the molecules become even less ordered, and so you lose even more information about their state. Similarly, cooling ice below freezing gains you even more information. In principle, if you could cool the water all the way to absolute zero, its absolute entropy would also reach zero, and you'd have complete knowledge of the precise physical state of all of the molecules in it.

So, the question for readers: do you accept that this is a valid definition of "information"? If so, you've accepted that heat flows can produce huge changes in information -- far larger than any mere intelligent human could ever produce. If you reject it, then you have to toss out Rob's claim as nonsense.

Now let me look at the last part of Rob's statement:
Outside of Darwin’s theory of evolution, there are no irreversible reactions in which entropy decreases (information is gained), because that would enable a perpetual motion machine.
This part is just flat-out wrong. Entropy decreases happen all over the place: objects cooling off, freezing, condensing, many chemical reactions, etc. And those are just the easy, obvious examples. This doesn't violate the second law or enable perpetual motion machines because all instances of entropy decrease are coupled to equal-or-larger entropy increases somewhere else. When water freezes, it gives off heat, which heats up... well, wherever the heat goes, thus increasing the entropy of that place by at least as much as the entropy decrease due to freezing.

As far as the second law is concerned, evolution works exactly the same way: organisms emit heat (and entropy in other forms), thus increasing the entropy of their surroundings by more than their entropy decreases (if it does -- it doesn't always). They take in energy in low-entropy forms and dump high-entropy forms of energy to their surroundings. This is what allows them to do a number of things that go against the general trend toward thermodynamic equilibrium, and thus superficially appear to violate the second law: they grow, reproduce, evolve, maintain their state despite changes in their surroundings, etc.

The same thing happens at larger scales as well: as I've pointed out repeatedly, the Earth as a whole emits more entropy to its surroundings (by at least 3.7e14 J/K per second -- which corresponds to 3.4e37 bits per second of "information" if you accept the negative information = entropy view). This powers a wide variety of away-from-equilibrium processes on Earth, even beyond the ones relating to living organisms. Rob's claim that evolution is unique in this respect is completely and utterly wrong. Gordon Davisson
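A quick check of the bits-per-second conversion mentioned here, using the corrected 3.3e14 J/K per second figure given in the follow-up comment above and the same entropy-to-bits factor k_B ln 2:

```python
import math

k_B = 1.380649e-23             # Boltzmann constant, J/K
entropy_export_rate = 3.3e14   # J/K per second (the corrected figure from the follow-up comment)

bits_per_second = entropy_export_rate / (k_B * math.log(2))
print(f"{bits_per_second:.1e} bits per second")   # ~3.4e37
```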
