Uncommon Descent Serving The Intelligent Design Community

And once more: Life can arise naturally from chemistry!

Yet it isn’t happening, and we have no idea how it happened even once…

From science writer Michael Gross at Cell:

Rapid progress in several research fields relating to the origin of life bring us closer to the point where it may become feasible to recreate coherent and plausible models of early life in the laboratory. (paywall)

It’s a survey article, and it concludes:

“One of the main new aspects of origins research is the growing effort to connect chemistry to geology,” Jack Szostak notes. “Finding reasonable geological settings for the origin of life is a critical aspect of understanding the whole pathway. We’ve moved beyond thinking that life emerged from the oceans or at deep sea hydrothermal vents. New ideas for surface environments that could allow organic materials to accumulate over time, so that prebiotic chemistry could happen in very concentrated solutions, are a big advance.”

We can conclude from all of this that the emergence of life in a universe that provides a suitable set of conditions, like ours does, is an entirely natural process and does not require the postulate of a miracle birth, on our own planet and on many others. (Current Biology 26, R1247–R1249, December 19, 2016) More.

Okay. “Moved beyond” is a way of saying that hydrothermal vents are not the answer after all.

Coherent and plausible models in the lab are not the same thing as knowing what happened. And the more of them there are, the more necessary it would become to explain why life isn’t coming into existence all over the place all the time.

And at times, we are not even sure what we mean. Do some viruses meet the criterion of being alive?

A friend writes to ask: “Imagine how it would sound if a study on any other topic had the words ‘does not require the postulate of a miracle’ in the conclusion. Somehow they seem to think that it is perfectly appropriate and natural when discussing the origin of life.”

Aw, let’s be generous, it’s New Year’s Eve: When people really haven’t got very far in a discipline for the better part of two centuries, they tend to think in terms of zero or miracle. That’s just what they do.

Another friend writes to say that the thesis seems to be: Given enough time, anything can happen. If so, the proposition does not really depend on evidence. In 1954, Harvard biochemist George Wald wrote,

Time is in fact the hero of the plot. The time with which we have to deal is of the order of two billion years. What we regard as impossible on the basis of human experience is meaningless here. Given so much time, the “impossible” becomes possible, the possible probable, and the probable virtually certain. One has only to wait: time itself performs the miracles. (From TalkOrigins, Wald, Scientific American, p. 48).

Really? Physicist Rob Sheldon has doubts:

In physics, we discuss reversible and irreversible reactions. If entropy (or information) is unchanged, then the system is reversible. If entropy increases (loss of information), then the reaction cannot be reversed. Outside of Darwin’s theory of evolution, there are no irreversible reactions in which entropy decreases (information is gained), because that would enable a perpetual motion machine.

Thus time is of no benefit for evolution, since a perpetual motion machine is no more possible if it runs slowly than if it runs quickly. And while errors may persist in biology because it may be too complicated to be sure of the entropy, the same cannot be said of chemistry. So the biggest boondoggle of all is attributing to precise and exact chemistry the magical anti-entropy properties of inexact and imprecise biology simply because one is a materialist reductionist who thinks life is a substance. I’m not picking on chemists or biologists, because I’ve even heard physicists say that evolution causes the multiverse to spawn us. Evidently this anti-entropy magic is just too powerful to keep it bottled up in biology alone, the world needs more perpetual motion salesmen, they spontaneously generate.

Oh well, happy New Year.

See also: Researchers: Bacteria fossils predate the origin of oxygen

Rob Sheldon: Why the sulfur-based life forms never amounted to much

Welcome to “RNA world,” the five-star hotel of origin-of-life theories

and

What we know and don’t know about the origin of life

Follow UD News at Twitter!

Comments
Meanwhile, there’s no evidence that life can arise from chemistry.
Mung
January 6, 2017, 07:32 PM PDT
We might profit more from reading the Russians. I have the book by A. I. Khinchin, Mathematical Foundations of Information Theory. He even has one translated by George Gamow, Mathematical Foundations of Statistical Mechanics. I should check that out.
Mung
January 6, 2017, 04:34 PM PDT
Mung, those are not exact words, I paraphrase. KF
kairosfocus
January 5, 2017, 09:28 AM PDT
...so we see why if we consider the observed cosmos as an isolated system — something Sears and Salinger pointed out as philosophically loaded in their textbook, the one from which I first seriously studied these matters...
I like this quote.
Mung
January 5, 2017, 07:51 AM PDT
Headlined, with diagrams: https://uncommondescent.com/intelligent-design/of-s-t-r-i-ng-s-nanobots-informational-statistical-thermodynamics-and-evolution/
kairosfocus
January 5, 2017, 02:47 AM PDT
PPS: I found an elementary introduction to statistical entropy very helpful, from the Russian authors Yavorski and Pinsky, in their Physics, vol I [1974]: as we consider a simple model of diffusion, let us think of ten white and ten black balls in two rows in a container. There is of course but one way in which there are ten whites in the top row; the balls of any one colour being for our purposes identical. But on shuffling, there are 63,504 ways to arrange five each of black and white balls in the two rows, and 6-4 distributions may occur in two ways, each with 44,100 alternatives. So, if we for the moment see the set of balls as circulating among the various different possible arrangements at random, and spending about the same time in each possible state on average, the time the system spends in any given state will be proportionate to the relative number of ways that state may be achieved. Immediately, we see that the system will gravitate towards the cluster of more evenly distributed states. In short, we have just seen that there is a natural trend of change at random, towards the more thermodynamically probable macrostates, i.e the ones with higher statistical weights. So "[b]y comparing the [thermodynamic] probabilities of two states of a thermodynamic system, we can establish at once the direction of the process that is [spontaneously] feasible in the given system. It will correspond to a transition from a less probable to a more probable state." [p. 284.] This is in effect the statistical form of the 2nd law of thermodynamics. Thus, too, the behaviour of the Clausius isolated system of A and B with d'Q of heat moving A --> B by reason of B's lower temperature is readily understood: First, -d'Q/T_a is of smaller magnitude than + d'Q/T_b, as T_b is less than T_a and both are positive values; so we see why if we consider the observed cosmos as an isolated system -- something Sears and Salinger pointed out as philosophically loaded in their textbook, the one from which I first seriously studied these matters -- then a transfer or energy by reason of temperature difference [i.e. heat] will net increase entropy. Second, we bridge to the micro view if we see how importing d'Q of random molecular energy so far increases the number of ways energy can be distributed at micro-scale in B, that the resulting rise in B's entropy swamps the fall in A's entropy. That is, we have just lost a lot more information about B's micro-state than we gained about A's. Moreover, given that FSCO/I-rich micro-arrangements are relatively rare in the set of possible arrangements, we can also see why it is hard to account for the origin of such states by spontaneous processes in the scope of the observable universe. (Of course, since it is as a rule very inconvenient to work in terms of statistical weights of macrostates [i.e W], we instead move to entropy, through s = k ln W or Gibbs' more complex formulation. Part of how this is done can be seen by imagining a system in which there are W ways accessible, and imagining a partition into parts 1 and 2. W = W1*W2, as for each arrangement in 1 all accessible arrangements in 2 are possible and vice versa, but it is far more convenient to have an additive measure, i.e we need to go to logs. The constant of proportionality, k, is the famous Boltzmann constant and is in effect the universal gas constant, R, on a per molecule basis, i.e we divide R by the Avogadro Number, NA, to get: k = R/NA. 
The two approaches to entropy, by Clausius, and Boltzmann, of course, correspond. In real-world systems of any significant scale, the relative statistical weights are usually so disproportionate, that the classical observation that entropy naturally tends to increase, is readily apparent.) Third, the diffusion model is a LINEAR space, a string structure. This allows us to look at strings thermodynamically and statistically. Without losing force on the basic issue, let us consider the simplest case, equiprobability of position, with an alphabet of two possibilities [B vs. W balls]. Here, we see that special arrangements that may reflect strong order or organisation are vastly rarer in the set of possibilities than those that are near the peak of this distribution. For 1,000 balls, half B and half W, the peak is obviously going to be with the balls spread out in such a way that the next ball has 50-50 odds of being B or W, maximum uncertainty. Now, let us follow L K Nash and Mandl, and go to a string of 1,000 coins or a string of paramagnetic elements in a weak field. (The latter demonstrates that the coin-string model is physically relevant.) We now have binary elements, and a binomial distribution with a field of binary digits, so we know there are 1.07 *10^301 possibilities from 000 . . . 0 to 111 . . . 1 inclusive. But if we cluster possibilities by proportions that are H and T, we see that there is a sharp peak near 500:500, and that by contrast there are much fewer possibilities as we approach 1,000:0 or 0:1,000. At the extremes, as the coins are identical, there is but one way each. Likewise for alternating H, T -- a special arrangement, there are just two ways, H first, T first. We now see how order accords with compressibility of description -- more or less, algorithmic compressibility. To pick up one of the "typical" values near the peak, we essentially need to cite the string, while for the extremes, we need only give a brief description. this was Orgel's point on info capacity being correlated with length of description string. Now, as we know Trevors and Abel in 2004 pointed out that code-bearing strings [or aperiodic functional ones otherwise] will resist compressibility but will be more compressible than the utterly flat random cases. This defines an island of function. And we see that this is because any code or functionally specific string will naturally have in it some redundancy, there will not be a 50-50 even distribution in all cases. There is a statistically dominant cluster, utterly overwhelmingly dominant, near 500-500 in no particular pattern or organised functional message-bearing framework. We can now come back to the entropy view, the peak is the high entropy, low information case. That is, if we imagine some nano-bots that can rearrange coin patterns, if they act at random, they will utterly likely produce the near-500-500 no particular order result. But now, if we instruct them with a short algorithm, they can construct all H or all T, or we can give them instructions to do HT-HT . . . etc. Or, we can feed in ASCII code or some other description language based information. It is conceivable that the robots could generate such codes by chance, but the degree of isolation in the space of possibilities is such that effectively these are unobservable on the scale of the observed cosmos. As, a blind random search of the space of possibilities will be maximally unlikely to hit on the highly informational patterns. 
It does not matter if we were to boost the robot energy levels and speed them up to a maximum reasonable speed, that of molecular interactions and it does not matter if in effect the 10^80 atoms of the observed cosmos were given con strings and robots to flip so the strings could be flipped and read 10^12 - 14 times per s for 10^17s. Which is the sort of gamut we have available. We can confidently infer that if we see a string of 1,000 coins in a meaningful ordered or organised pattern, they were put that way by intelligently directed work, based on information. By direct import of the statistical thermodynamic reasoning we have been using. That is, we here see the basis for the confident and reliable inference to design on seeing FSCO/I. Going further, we can see that codes includes descriptions of functional organisation as per AutoCAD etc, and that such can specify any 3-d organisation of components that is functional. Where also, we can readily follow the instructions using a von Neumann universal constructor facility [make it to be self replicating and done, too] and test for observable function. Vary the instructions at random, and we soon enough see where the limits of an island of function are as function ceases. Alternatively, we can start with a random string, and then allow our nanobots to assemble. If something works, we preserve and allow further incremental, random change. That is, we have -- as a thought exercise -- an evolutionary informatics model. And, we have seen how discussion on strings is without loss of generality, as strings can describe anything else of relevance and such descriptions can be actualised as 3-d entities through a universal constructor. Which can be self-replicating, thus the test extends to evolution. (And yes, this also points tot he issue of the informational description of the universal constructor and self replication facility as the first threshold to be passed. Nor is this just a mind-game, the living cell is exactly this sort of thing, through perhaps not yet a full bore universal constructor. [Give us a couple of hundred years to figure that out and we will likely have nanobot swarms that will be just that!]) The inference at this point is obvious: by the utter dominance of non-functional configurations, 500 - 1,000 bits of information is a generous estimate of the upper limit for blind mechanisms to find functional forms. This then extends directly into looking at the genome and to the string length of proteins as an index of find-ability, thence the evaluation of plausibility of origin of life and body plan level macro-evo models. Origin of life by blind chance and/or mechanical necessity it utterly implausible. Minimal genomes are credibly 100 - 1,000 k bases, corresponding to about 100 times the size of the upper threshold. Origin of major body plans, similarly, reasonably requires some 10 - 100+ mn new bases. We are now 10 - 100 thousand times the threshold. Inference: the FSCO/I in first cell based life is there by design. Likewise that in novel body plans up to our own. And, such is rooted in the informational context of such life.kairosfocus
January 5, 2017, 02:13 AM PDT
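
A quick numerical check of the figures kairosfocus cites above (the 63,504 and 44,100 statistical weights, k = R/N_A, the roughly 1.07 × 10^301 configurations of a 1,000-coin string, and the 10^80 × 10^14 × 10^17 search budget) can be done in a few lines of Python. This is only a sketch of that arithmetic; the helper names and layout are ours.

```python
from math import comb, log10

# Two-row ball model: 10 white + 10 black balls, 10 slots per row.
# A macrostate "k white balls in the top row" can be realised in
# C(10, k) * C(10, 10 - k) ways (which slots are white on top, which on the bottom).
def weight(k_white_on_top: int) -> int:
    return comb(10, k_white_on_top) * comb(10, 10 - k_white_on_top)

print(weight(10))            # 1      -- all white on top, a unique arrangement
print(weight(5))             # 63504  -- the 5-5 peak quoted in the comment
print(weight(6), weight(4))  # 44100 44100 -- the two quoted 6-4 distributions

# Boltzmann's constant from the gas constant and Avogadro's number: k = R / N_A
R, N_A = 8.314462618, 6.02214076e23      # J/(mol K), 1/mol
print(f"k = R/N_A = {R / N_A:.4e} J/K")  # ~1.381e-23 J/K

# A 1,000-element binary string has 2^1000 configurations...
log10_W = 1000 * log10(2)
print(f"2^1000 = 10^{log10_W:.2f}")      # 10^301.03, i.e. about 1.07e301

# ...versus the comment's generous search budget: 10^80 atoms making
# up to 10^14 observations per second for 10^17 seconds.
log10_searches = 80 + 14 + 17
print(f"fraction of the space searched ~ 10^{log10_searches - log10_W:.0f}")  # ~10^-190
```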
PS: Brillouin, again from my note:
How is it possible to formulate a scientific theory of information? The first requirement is to start from a precise definition. . . . . We consider a problem involving a certain number of possible answers, if we have no special information on the actual situation. When we happen to be in possession of some information on the problem, the number of possible answers is reduced, and complete information may even leave us with only one possible answer. Information is a function of the ratio of the number of possible answers before and after, and we choose a logarithmic law in order to insure additivity of the information contained in independent situations . . . . Physics enters the picture when we discover a remarkable likeness between information and entropy. This similarity was noticed long ago by L. Szilard, in an old paper of 1929, which was the forerunner of the present theory. In this paper, Szilard was really pioneering in the unknown territory which we are now exploring in all directions. He investigated the problem of Maxwell's demon, and this is one of the important subjects discussed in this book. The connection between information and entropy was rediscovered by C. Shannon in a different class of problems, and we devote many chapters to this comparison. We prove that information must be considered as a negative term in the entropy of a system; in short, information is negentropy. The entropy of a physical system has often been described as a measure of randomness in the structure of the system. We can now state this result in a slightly different way: Every physical system is incompletely defined. We only know the values of some macroscopic variables, and we are unable to specify the exact positions and velocities of all the molecules contained in a system. We have only scanty, partial information on the system, and most of the information on the detailed structure is missing. Entropy measures the lack of information; it gives us the total amount of missing information on the ultramicroscopic structure of the system. This point of view is defined as the negentropy principle of information, and it leads directly to a generalization of the second principle of thermodynamics, since entropy and information must, be discussed together and cannot be treated separately. This negentropy principle of information will be justified by a variety of examples ranging from theoretical physics to everyday life. The essential point is to show that any observation or experiment made on a physical system automatically results in an increase of the entropy of the laboratory. It is then possible to compare the loss of negentropy (increase of entropy) with the amount of information obtained. The efficiency of an experiment can be defined as the ratio of information obtained to the associated increase in entropy. This efficiency is always smaller than unity, according to the generalized Carnot principle. Examples show that the efficiency can be nearly unity in some special examples, but may also be extremely low in other cases. This line of discussion is very useful in a comparison of fundamental experiments used in science, more particularly in physics. It leads to a new investigation of the efficiency of different methods of observation, as well as their accuracy and reliability . . . . [ Science and Information Theory, Second Edition; 1962. From an online excerpt of the Dover Reprint edition . . . ]
kairosfocus
January 5, 2017, 01:23 AM PDT
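
Brillouin’s definition in the excerpt above, information as a logarithmic function of the ratio of possible answers before and after, fits in one line; the sketch below uses made-up counts purely to illustrate the additivity he mentions.

```python
from math import log2

def information_gained(answers_before: int, answers_after: int) -> float:
    # Bits of information from narrowing a set of equally likely answers.
    return log2(answers_before / answers_after)

print(information_gained(64, 4))                               # 4.0 bits
# Additivity over independent steps: 64 -> 16 -> 4 also gives 2 + 2 = 4 bits.
print(information_gained(64, 16) + information_gained(16, 4))  # 4.0 bits
```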
Mung, yup. The pivotal issue is to make the conceptual leap to see how information enters the picture: a puzzle posed when the Shannon average info per symbol metric took on a suspiciously familiar shape, c. 1948 . . . and there was a discussion that ended up with yup we call it "[information] entropy." A decade later, Jaynes was pointing a way, and so was Brillouin with his "negentropy" view; notice the negative of entropy value that just "pops out" in my derivation snippet above. And Harry S Robertson is truly awesome indeed. Believe it or not, I did not know what was in it (the ID debates were not in my ken at that time), it just looked like a good Thermo-D book when I bought it. KFkairosfocus
January 5, 2017, 01:06 AM PDT
Gordon Davisson:
Another reason this sort of information doesn’t have much to do with what most people think of as “information” is that it’s information about the microscopic state of the system, and that’s not something most people are concerned with. They don’t really care exactly where each nitrogen and oxygen molecule is in the air around them, but that’s the sort of information we’re talking about.
It’s the same sort of information that people are accustomed to in everyday life, such as how many yes/no questions it takes, on average, in a game of twenty questions.
Mung
January 4, 2017, 04:28 PM PDT
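
Mung’s twenty-questions point is just the base-2 logarithm at work; a minimal sketch (the item counts are arbitrary):

```python
from math import ceil, log2

def questions_needed(possibilities: int) -> int:
    # Minimum yes/no questions guaranteed to single out one of n equally likely items.
    return ceil(log2(possibilities))

print(questions_needed(1_000_000))  # 20 -- twenty questions handle about a million items
print(2 ** 20)                      # 1048576 items distinguishable by 20 questions
```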
Gordon Davisson:
From the ID point of view, there’s an even bigger problem: since it’s related to thermodynamics, and thermodynamics is mostly about heat and energy… this sort of information is also mostly about heat and energy.
The correct way to look at it, as explained by Ben-Naim, is that entropy is a special case of the Shannon measure. So not all information measures are necessarily thermodynamic in their application. The “problem” then dissipates. The Shannon measure applies to any probability distribution, whereas thermodynamic entropy does not, as it is only applicable for certain specific distributions.
Mung
January 4, 2017, 04:23 PM PDT
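
The distinction Mung draws from Ben-Naim is easy to see in code: the Shannon measure below accepts any probability distribution, thermodynamic or not (the example distributions are invented):

```python
from math import log2

def shannon_measure(probs) -> float:
    # Shannon measure of information, in bits, for any probability distribution.
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(p * log2(p) for p in probs if p > 0)

print(shannon_measure([0.5, 0.2, 0.1, 0.1, 0.05, 0.05]))  # ~2.06 bits, a loaded die
print(shannon_measure([1 / 6] * 6))                       # ~2.58 bits = log2(6), a fair die
```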
kairosfocus:
Thermodynamic entropy turns out to have a feasible interpretation as missing info to specify micro-state [particular cell in phase space] given macro-observable state.
What’s more, this interpretation of entropy has nothing to do with order or disorder and it is easily shown that the order/disorder interpretation is in fact misleading. Notice Gordon’s use of “generally” in his post.
Mung
January 4, 2017, 04:15 PM PDT
As to the title of the OP... News, I’m not even certain that it has been shown that chemistry can arise naturally from physics!
Mung
January 4, 2017, 04:06 PM PDT
PS: Durston et al, 2007, again using my note:
Abel and Trevors have delineated three qualitative aspects of linear digital sequence complexity [2,3], Random Sequence Complexity (RSC), Ordered Sequence Complexity (OSC) and Functional Sequence Complexity (FSC). RSC corresponds to stochastic ensembles with minimal physicochemical bias and little or no tendency toward functional free-energy binding. OSC is usually patterned either by the natural regularities described by physical laws or by statistically weighted means. For example, a physico-chemical self-ordering tendency creates redundant patterns such as highly-patterned polysaccharides and the polyadenosines adsorbed onto montmorillonite [4]. Repeating motifs, with or without biofunction, result in observed OSC in nucleic acid sequences. The redundancy in OSC can, in principle, be compressed by an algorithm shorter than the sequence itself. As Abel and Trevors have pointed out, neither RSC nor OSC, or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life [5]. FSC includes the dimension of functionality [2,3]. Szostak [6] argued that neither Shannon's original measure of uncertainty [7] nor the measure of algorithmic complexity [8] are sufficient. Shannon's classical information theory does not consider the meaning, or function, of a message. Algorithmic complexity fails to account for the observation that 'different molecular structures may be functionally equivalent'. For this reason, Szostak suggested that a new measure of information–functional information–is required [6] . . . . Shannon uncertainty, however, can be extended to measure the joint variable (X, F), where X represents the variability of data, and F functionality. This explicitly incorporates empirical knowledge of metabolic function into the measure that is usually important for evaluating sequence complexity. This measure of both the observed data and a conceptual variable of function jointly can be called Functional Uncertainty (Hf) [17], and is defined by the equation: H(Xf(t)) = -[SUM]P(Xf(t)) logP(Xf(t)) . . . (1) where Xf denotes the conditional variable of the given sequence data (X) on the described biological function f which is an outcome of the variable (F). For example, a set of 2,442 aligned sequences of proteins belonging to the ubiquitin protein family (used in the experiment later) can be assumed to satisfy the same specified function f, where f might represent the known 3-D structure of the ubiquitin protein family, or some other function common to ubiquitin. The entire set of aligned sequences that satisfies that function, therefore, constitutes the outcomes of Xf. Here, functionality relates to the whole protein family which can be inputted from a database . . . . In our approach, we leave the specific defined meaning of functionality as an input to the application, in reference to the whole sequence family. It may represent a particular domain, or the whole protein structure, or any specified function with respect to the cell. Mathematically, it is defined precisely as an outcome of a discrete-valued variable, denoted as F={f}. The set of outcomes can be thought of as specified biological states. They are presumed non-overlapping, but can be extended to be fuzzy elements . . . Biological function is mostly, though not entirely determined by the organism's genetic instructions [24-26]. 
The function could theoretically arise stochastically through mutational changes coupled with selection pressure, or through human experimenter involvement [13-15] . . . . The ground state g (an outcome of F) of a system is the state of presumed highest uncertainty (not necessarily equally probable) permitted by the constraints of the physical system, when no specified biological function is required or present. Certain physical systems may constrain the number of options in the ground state so that not all possible sequences are equally probable [27]. An example of a highly constrained ground state resulting in a highly ordered sequence occurs when the phosphorimidazolide of adenosine is added daily to a decameric primer bound to montmorillonite clay, producing a perfectly ordered, 50-mer sequence of polyadenosine [3]. In this case, the ground state permits only one single possible sequence . . . . The null state, a possible outcome of F denoted as ø, is defined here as a special case of the ground state of highest uncertainly when the physical system imposes no constraints at all, resulting in the equi-probability of all possible sequences or options. Such sequencing has been called "dynamically inert, dynamically decoupled, or dynamically incoherent" [28,29]. For example, the ground state of a 300 amino acid protein family can be represented by a completely random 300 amino acid sequence where functional constraints have been loosened such that any of the 20 amino acids will suffice at any of the 300 sites. From Eqn. (1) the functional uncertainty of the null state is represented as H(Xø(ti))= - [SUM]P(Xø(ti)) log P(Xø(ti)) . . . (3) where (Xø(ti)) is the conditional variable for all possible equiprobable sequences. Consider the number of all possible sequences is denoted by W. Letting the length of each sequence be denoted by N and the number of possible options at each site in the sequence be denoted by m, W = mN. For example, for a protein of length N = 257 and assuming that the number of possible options at each site is m = 20, W = 20257. Since, for the null state, we are requiring that there are no constraints and all possible sequences are equally probable, P(Xø(ti)) = 1/W and H(Xø(ti))= - [SUM](1/W) log (1/W) = log W . . . (4) The change in functional uncertainty from the null state is, therefore, ?H(Xø(ti), Xf(tj)) = log (W) - H(Xf(ti)). (5) . . . . The measure of Functional Sequence Complexity, denoted as ? [zeta], is defined as the change in functional uncertainty from the ground state H(Xg(ti)) to the functional state H(Xf(ti)), or [zeta] ? = ?H (Xg(ti), Xf(tj)) . . . (6) The resulting unit of measure is defined on the joint data and functionality variable, which we call Fits (or Functional bits). The unit Fit thus defined is related to the intuitive concept of functional information, including genetic instruction and, thus, provides an important distinction between functional information and Shannon information [6,32]. Eqn. (6) describes a measure to calculate the functional information of the whole molecule, that is, with respect to the functionality of the protein considered. The functionality of the protein can be known and is consistent with the whole protein family, given as inputs from the database. However, the functionality of a sub-sequence or particular sites of a molecule can be substantially different [12]. The functionality of a sub-molecule, though clearly extremely important, has to be identified and discovered . . . . 
To avoid the complication of considering functionality at the sub-molecular level, we crudely assume that each site in a molecule, when calculated to have a high measure of FSC, correlates with the functionality of the whole molecule. The measure of FSC of the whole molecule, is then the total sum of the measured FSC for each site in the aligned sequences. Consider that there are usually only 20 different amino acids possible per site for proteins, Eqn. (6) can be used to calculate a maximum Fit value/protein amino acid site of 4.32 Fits/site [NB: Log2 (20) = 4.32]. We use the formula log (20) - H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability, in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 1049 different 121-residue sequences that could fall into the Ribsomal S12 family of proteins, resulting in an evolutionary search target of approximately 10-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space. A high Fit value for individual sites within a protein indicates sites that require a high degree of functional information. High Fit values may also point to the key structural or binding sites within the overall 3-D structure.
kairosfocus
January 4, 2017, 02:30 PM PDT
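
To make the quoted Durston et al. procedure concrete, here is a rough sketch of the Fits calculation, the change in functional uncertainty from the null state summed over aligned sites. The three short toy “sequences” are invented for illustration and are not from the paper’s data.

```python
from collections import Counter
from math import log2

def functional_bits(aligned_seqs, alphabet_size: int = 20) -> float:
    """Sketch of Durston et al.'s Functional Sequence Complexity (Fits):
    per site, null-state uncertainty log2(alphabet_size) minus the observed
    Shannon uncertainty of the aligned residues, summed over all sites."""
    n_seqs = len(aligned_seqs)
    total = 0.0
    for site in zip(*aligned_seqs):                 # columns of the alignment
        counts = Counter(site)
        h_site = -sum((c / n_seqs) * log2(c / n_seqs) for c in counts.values())
        total += log2(alphabet_size) - h_site
    return total

toy_family = ["MKVLA", "MKVLG", "MRVLA"]            # invented 5-residue "family"
print(f"{log2(20):.2f} Fits/site maximum")          # 4.32 Fits/site, as in the paper
print(f"toy family FSC: {functional_bits(toy_family):.2f} Fits")
```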
GD, Consider polymers as strings, e.g. AA and D/RNA. These strings exist in a config space of possibilities which has in it islands of function. The function is observable, as is the non function, per the cell. Thus, we can speak to issues of observability, configuration, function, search spaces etc, and we can go on to assess what Durston et al did in their 2007 paper, look at null, ground, functional states and information content, including the degree of freedom warranted per empirical observation of functional proteins. In short, Jaynes et al are not using an unusual view, they are seeing the connexion between the entropy that cropped up naturally in information theory and what lies behind the entropy developed earlier for thermodynamics. Thermodynamic entropy turns out to have a feasible interpretation as missing info to specify micro-state [particular cell in phase space] given macro-observable state. These then allow us to look at functional states in AA and D/RNA strings at first level. Note the following extended clip from Durston et al 2007. Going on, we can note that strings extend to much broader cases, as functional organisation is reducible to strings in a description language, as with AutoCAD etc. So, discussion on strings can cover functional organisation in general, and leads to a configuration space, island of function analysis. (Roughly, a config space is a phase space where we are not interested in momentum issues.) The result is much as you would expect, non-functional states form a vast sea, and for things relevant to FSCO/I, we have deeply isolated islands of function. Thus, we see the sol system and/or observed cosmos scale search challenge and why 500 - 1,000 its is a conservative threshold. At upper end, ponder 10^80 atoms working as observers and going at 10^12 - 10^14/s observations for 10^17 s, vs a space from 0000 . . . 0 to 1111 . . . 1 with 1.07*10^301 cells to be blindly searched. Where, suggested "golden" searches face the challenge that a search is a subset so for a set of n members the seat of possible searches comes from the power set, of magnitude 2^n. We see n of order 10^301 already. So, the issue is forming the concepts and applying them properly. Sure, cooling down a thermodynamic system drastically reduces missing info, e.g. freezing locks things down to an orderly pattern. That does not undercut the force of the issue we are looking at, especially when one ponders a Darwin's pond or the like as a pre-life context. KFkairosfocus
January 4, 2017, 02:25 PM PDT
Kairosfocus, I basically agree with the Jaynes view of the link between information theory and thermodynamics, and I've been aware of it for quite a long time. But as I said in my first comment here, it "requires you to use a very unusual definition of 'information', and one that’s pretty irrelevant to the usual information-based arguments for ID". Specifically, it identifies thermodynamic entropy with the amount of information missing from a macroscopic description of a system's state vs the complete information in a microscopically-detailed description of the system's state. IMO this is a perfectly legitimate way to think about thermodynamic entropy, but it doesn't have much to do with how most people think about information, nor (as far as I can see) have much to with the sorts of information that ID concerns itself with. One of the big disconnects between this sort of information and what ID is concerned with is precisely the order vs. organization distinction -- entropy (both thermodynamic and Shannon) is about order and disorder, not organization. Maximum entropy generally corresponds to the data (information) or system (thermo) being maximally disordered and random. Minimum entropy generally corresponds to it being maximally ordered. Organized systems generally have intermediate entropy. Here's something I posted a while ago on the subject:
To clarify the difference between organization, order, and disorder, let me draw on David Abel and Jack Trevors’ paper, “Three subsets of sequence complexity and their relevance to biopolymeric information” (published in Theoretical Biology and Medical Modelling 2005, 2:29). Actually, I’ll mostly draw on their Figure 4, which tries to diagram the relationships between a number of different types of (genetic) sequence complexity — random sequence complexity (RSC — roughly corresponding to disorder), ordered (OSC), and functional (FSC — roughly corresponging to organization). What I’m interested in here is the ordered-vs-random axis (horizontal on the graph), and functional axis (Y2/vertical on the graph). I’ll ignore the algorithmic compressibility axis (Y1 on the graph). Please take a look at the graph before continuing… I’ll wait… Back? Good, now, the point I want to make is that the connection between thermal and information entropy only relates to the horizontal (ordered-vs-random) axis, not the vertical (functional, or organizational) axis. The point of minimum entropy is at the left-side bottom of the graph, corresponding to pure order. The point of maximum entropy is at the right-side bottom of the graph, corresponding to pure randomness. The functional/ordered region is in between those, and will have intermediate entropy.
See my full earlier comment for examples and more details of my view on this. Another reason this sort of information doesn't have much to do with what most people think of as "information" is that it's information about the microscopic state of the system, and that's not something most people are concerned with. They don't really care exactly where each nitrogen and oxygen molecule is in the air around them, but that's the sort of information we're talking about. From the ID point of view, there's an even bigger problem: since it's related to thermodynamics, and thermodynamics is mostly about heat and energy... this sort of information is also mostly about heat and energy. Above, I quoted the example of cooling 1 cc of water off by 1° C, which decreases the water's thermodynamic entropy by 3.33e-3 cal/K, and (using this definition of information) corresponds to an information gain of 1.46e21 bits. As far as I can see, a definition of information where a huge quantity of information can be produced simply by cooling something off should be anathema to an ID argument. You can argue for this definition if you want, but I don't see how you can use it to argue for ID; it looks, if it's relevant at all, like a huge argument against ID. (Note: I'm not claiming it actually is an argument against ID; I think it's basically irrelevant. But if you think it's relevant, you have to explain why it doesn't undermine your case.)Gordon Davisson
January 4, 2017, 01:33 PM PDT
If you have an isolated system consisting of a single particle in a volume separated into equal sub-volumes by a barrier A|B and you remove the barrier, which way does the information flow? Frankly, immho, that's just silly talk. Our uncertainty as to the location of the particle increased. Did information flow out of our brain into the surrounding universe? There was no flow of information, there was only an increase in uncertainty. Shannon's measure is probabilistic. So is thermodynamics. Flow of information has nothing to do with it.Mung
January 4, 2017, 07:29 AM PDT
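
Whatever one calls the “flow,” the quantity in Mung’s barrier example is straightforward: for a single particle spread uniformly over the accessible cells, removing the A|B barrier doubles the number of equally likely locations and raises the uncertainty by exactly one bit. The coarse-graining below is arbitrary.

```python
from math import log2

def location_uncertainty_bits(num_cells: int) -> float:
    # Shannon uncertainty of one particle equally likely to occupy any of num_cells cells.
    return log2(num_cells)

cells_A_only, cells_A_and_B = 1000, 2000   # arbitrary coarse-graining of the two volumes
increase = location_uncertainty_bits(cells_A_and_B) - location_uncertainty_bits(cells_A_only)
print(f"uncertainty increase on removing the barrier: {increase:.3f} bit")  # 1.000 bit
```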
Gordon Davisson:
And as I said in my first comment on this post, entropy decreases are common and unremarkable.
Ben-Naim shreds popular science writing on entropy. He finds their comments remarkable. :) See his Information, Entropy, Life and the Universe: What We Know and What We Do Not Know and The Briefest History of Time: The History of Histories of Time and the Misconstrued Association between Entropy and Time.
Mung
January 4, 2017, 07:18 AM PDT
kairosfocus: “Summarising Harry Robertson’s Statistical Thermophysics (Prentice-Hall International, 1993)”
Awesome book.
Mung
January 4, 2017, 07:11 AM PDT
PS: I should add, that in discussing what I have descriptively summarised as functionally specific complex organisation and/or associated information [= FSCO/I for handy short], I recognised long since that we can identify description languages that allow us to specify the parts, orientation, coupling and overall arrangement, much as AutoCAD etc do. Thus, in effect we can measure the information content of an organised system that depends on specific functional organisation of parts, trough such a reduction to a structured string of y/n questions. Redundancies in arrangements can then be addressed through don't care bits. This means that discussion on s-t-r-i-n-g-s is WLOG. Orgel and Wicken between them long since recognised this, cf Orgel in 1973 and Wicken in 1979. Wicken, 1979:
‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]
Orgel, 1973:
. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . . [HT, Mung, fr. p. 190 & 196 (and now also HT Amazon second hand books):] These vague idea can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [--> this is of course equivalent to the string of yes/no questions required to specify the relevant "wiring diagram" for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002, also cf here, here and here (with here on self-moved agents as designing causes).] One can see intuitively that many instructions are needed to specify a complex structure. [--> so if the q's to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions. [--> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes. [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196. Of course, that immediately highlights OOL, where the required self-replicating entity is part of what has to be explained (cf. Paley here), a notorious conundrum for advocates of evolutionary materialism; one, that has led to mutual ruin documented by Shapiro and Orgel between metabolism first and genes first schools of thought, cf here. Behe would go on to point out that irreducibly complex structures are not credibly formed by incremental evolutionary processes and Menuge et al would bring up serious issues for the suggested exaptation alternative, cf. his challenges C1 - 5 in the just linked. Finally, Dembski highlights that CSI comes in deeply isolated islands T in much larger configuration spaces W, for biological systems functional islands. That puts up serious questions for origin of dozens of body plans reasonably requiring some 10 - 100+ mn bases of fresh genetic information to account for cell types, tissues, organs and multiple coherently integrated systems. Wicken's remarks a few years later as already were cited now take on fuller force in light of the further points from Orgel at pp. 190 and 196 . . . ]
kairosfocus
January 4, 2017, 05:08 AM PDT
GD, While I have little interest in who said what when (other than, that the negentropy view did arise from the discussions on highly suggestive similarities of mathematics from the outset), I have already pointed out above, from Wiki (via my always linked briefing note) -- as an admission against known interest -- on information-entropy links beyond mere similarity of equations (as in there is is an informational school of thought on entropy, cf Harry S Robertson's Statistical Thermophysics for a discussion), a school that seems to at minimum have a serious point:
. . . we may average the information per symbol in the communication system thusly (giving in terms of -H to make the additive relationships clearer): - H = p1 log p1 + p2 log p2 + . . . + pn log pn or, H = - SUM [pi log pi] . . . Eqn 5 H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: "it is often referred to as the entropy of the source." [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information, cf also here):
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) -- excerpting desperately and adding emphases and explanatory comments, we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the vary large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.) For, as he astutely observes on pp. vii - viii:
. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .
And, in more details, (pp. 3 - 6, 7, 36, cf Appendix 1 below for a more detailed development of thermodynamics issues and their tie-in with the inference to design; also see recent ArXiv papers by Duncan and Samura here and here):
. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . . [deriving informational entropy, cf. discussions here, here, here, here and here; also Sarfati's discussion of debates and the issue of open systems here . . . ] H({pi}) = - C [SUM over i] pi*ln pi, [. . . "my" Eqn 6] [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp - beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . . [H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . . Jayne's [summary rebuttal to a typical objection] is ". . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly 'objective' quantity . . . it is a function of [those variables] and does not depend on anybody's personality. There is no reason why it cannot be measured in the laboratory." . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.]
As is discussed briefly in Appendix 1, Thaxton, Bradley and Olsen [TBO], following Brillouin et al, in the 1984 foundational work for the modern Design Theory, The Mystery of Life's Origins [TMLO], exploit this information-entropy link, through the idea of moving from a random to a known microscopic configuration in the creation of the bio-functional polymers of life, and then -- again following Brillouin -- identify a quantitative information metric for the information of polymer molecules. For, in moving from a random to a functional molecule, we have in effect an objective, observable increment in information about the molecule. This leads to energy constraints, thence to a calculable concentration of such molecules in suggested, generously "plausible" primordial "soups." In effect, so unfavourable is the resulting thermodynamic balance, that the concentrations of the individual functional molecules in such a prebiotic soup are arguably so small as to be negligibly different from zero on a planet-wide scale. By many orders of magnitude, we don't get to even one molecule each of the required polymers per planet, much less bringing them together in the required proximity for them to work together as the molecular machinery of life. The linked chapter gives the details. More modern analyses [e.g. Trevors and Abel, here and here], however, tend to speak directly in terms of information and probabilities rather than the more arcane world of classical and statistical thermodynamics, so let us now return to that focus . . .
I trust this provides a trigger for starting a re-think on the links between entropy and information i/l/o fairly recent work and thought. KFkairosfocus
January 4, 2017, 04:42 AM PDT
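
The Robertson passage quoted above compresses a fair amount of algebra. The sketch below simply evaluates the quoted relations, p_i = exp(-(alpha + beta*y_i)) with Z = exp(alpha) = SUM exp(-beta*y_i) and H = -SUM p_i ln p_i, for a made-up three-level system; the energies and temperature are illustrative values only.

```python
from math import exp, log

k_B = 1.380649e-23                    # Boltzmann's constant, J/K
T = 300.0                             # illustrative temperature, K
beta = 1.0 / (k_B * T)

energies = [0.0, 1.0e-21, 2.0e-21]    # made-up energy levels, J

Z = sum(exp(-beta * e) for e in energies)        # partition function, Z = exp(alpha)
probs = [exp(-beta * e) / Z for e in energies]   # p_i = exp(-(alpha + beta*e_i))

H = -sum(p * log(p) for p in probs)   # information entropy, in nats
S = k_B * H                           # thermodynamic entropy via S = k*H

print([round(p, 4) for p in probs])   # higher energy levels are less probable
print(f"H = {H:.3f} nats, S = {S:.3e} J/K")
```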
Rob, thanks for coming in. But I don’t think you’ve addressed my objections. In fact, if anything you’ve made it worse. Aside from continuing to push Granville Sewell (whom I’ve refuted here before), you said:
Second, entropy increase = information decrease; entropy decrease=information increase. This is not my definition, it’s Claude Shannon’s definition of information in 1940, while the definition of entropy was coined closer to 1840.
As I pointed out when you made this claim before:
First, a minor quibble: correct me if I’m wrong, but I don’t think Shannon ever used this definition. I’m not an expert on the history here, but Norbert Weiner was the first I know of to identify information with a decrease in Shannon entropy, but not thermodynamic entropy. AIUI Leon Brillouin was the first to identify it with negative thermodynamic entropy.
And as I said in my first comment on this post, entropy decreases are common and unremarkable. If you couple that with your claim about the relation between entropy and information, leads to some really silly conclusions. Here's the example I gave in that previous discussion:
But it’s even worse than that, because according to this definition, simply cooling something off increases its information by a huge amount. Consider cooling one cc (about a thimbleful) of water off by one degree Centigrade (=1.8 degrees Fahrenheit) from, say, 27° C to 26° C (absolute temperatures of 300.15 K and 299.15 K respectively). The amount of heat removed is (almost by definition) about 1 calorie, so ΔQ = -1 cal, and ΔInformation = -ΔQ/T ~= -(-1 cal) / 300 K = +3.33e-3 cal/K. To convert that from thermodynamic units to information units (bits), we need to divide by Boltzmann’s constant times the natural log of 2; that’s k_B * ln(2) = 3.298e-24 cal/K * 0.6931 = 2.286e-24 cal/K. Dividing that into the entropy change we got above gives an information increase of …wait for it… 1.46e21 bits. That’s over a thousand billion billion bits of information just because a little water cooled off slightly. I’m going to go ahead and claim that this definition has almost nothing to do with what most people mean by “information”.
You never responded. And you haven't done anything to defend your original claim that Darwinian evolution is unique in decreasing entropy (or "reverses entropy-flow", if that means something different). Furthermore, since that exchange you and I got into another discussion on pretty much this same subject; here's my summary at the end:
– I argued that your criticism of Jeremy England’s work was mistaken, and based on conflating different definitions of “information”. You don’t seem to have responded.
– I refuted your claim that “Darwinian physicists refuse to calculate” the relevant information flows, and showed that the actual information flow is far beyond anything evolution might require. No response. (Note: I suppose you could point out that I’m not actually a physicist, and therefore don’t qualify as a “Darwinian physicist”, but that would be quibbling. Dr. Emory F. Bunn — an actual physicist — has done a very similar calculation, just without the conversion to information units.)
– I pointed out that you’d failed to convert Sir Fred Hoyle’s probability figure into the appropriate form before you cited it as information. Your response, apparently, is that I should read Hoyle so we’ll “have something to talk about beyond dimensional analysis”. Why should I bother? In the first place, I don’t need to know anything about the details of his calculation to know how to convert it to information units (or to see that you didn’t do the conversion). In the second place, he may have believed in evolution, but he doesn’t seem to have understood it at all well; therefore I doubt his calculations have any actual relevance.
– Finally, I asked for a more specific reference to the “information flow” calculation you said Granville Sewell had done, and your reply seems to be “Sewell ~2010”. That’s not more specific.
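To give a sense of the scale being argued about in the second item above, here is an order-of-magnitude sketch with my own illustrative round figures, not Dr. Bunn's or Gordon Davisson's actual calculation: the rate at which the Earth exports entropy by absorbing sunlight at roughly 5800 K and re-radiating it at roughly 255 K, converted into bits with the same k ln 2 factor used above.

import math

k_B = 1.380649e-23    # J/K
P_in = 1.74e17        # W of sunlight intercepted by Earth (solar constant x cross-section); assumed round figure
T_sun = 5778.0        # K, effective temperature of incoming solar photons
T_earth = 255.0       # K, effective emission temperature of the Earth

# Net rate of entropy export to space (ignoring the 4/3 photon-gas factor)
dS_dt = P_in * (1.0 / T_earth - 1.0 / T_sun)        # J/(K*s)
bits_per_second = dS_dt / (k_B * math.log(2))

print(f"entropy export rate: {dS_dt:.1e} J/(K*s)")            # about 6.5e14 J/(K*s)
print(f"equivalent: {bits_per_second:.1e} bits per second")   # about 7e37 bits per second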
You didn't respond any further. Are you going to respond seriously this time?
Gordon Davisson
January 3, 2017 at 10:47 PM PDT
kf, you may appreciate this excerpt from Marshall's book:
Perry Marshall, Evolution 2.0, page 153:

Wanna Build a Cell? A DVD Player Might Be Easier

Imagine that you’re building the world’s first DVD player. What must you have before you can turn it on and watch a movie for the first time? A DVD. How do you get a DVD? You need a DVD recorder first. How do you make a DVD recorder? First you have to define the language. When Russell Kirsch (who we met in chapter 8) created the world’s first digital image, he had to define a language for images first. Likewise you have to define the language that gets written on the DVD, then build hardware that speaks that language. Language must be defined first.

Our DVD recorder/player problem is an encoding-decoding problem, just like the information in DNA. You’ll recall that communication, by definition, requires four things to exist:
1. A code
2. An encoder that obeys the rules of a code
3. A message that obeys the rules of the code
4. A decoder that obeys the rules of the code

These four things—language, transmitter of language, message, and receiver of language—all have to be precisely defined in advance before any form of communication can be possible at all. A camera sends a signal to a DVD recorder, which records a DVD. The DVD player reads the DVD and converts it to a TV signal. This is conceptually identical to DNA translation. The only difference is that we don’t know how the original signal—the pattern in the first DNA strand—was encoded. The first DNA strand had to contain a plan to build something, and that plan had to get there somehow. An original encoder that translates the idea of an organism into instructions to build the organism (analogous to the camera) is directly implied.

The rules of any communication system are always defined in advance by a process of deliberate choices. There must be prearranged agreement between sender and receiver, otherwise communication is impossible. By definition, a communication system cannot evolve from something simpler because evolution itself requires communication to exist first. You can’t make copies of a message without the message, and you can’t create a message without first having a language. And before that, you need intent.

A code is an abstract, immaterial, nonphysical set of rules. There is no physical law that says ink on a piece of paper formed in the shape T-R-E-E should correspond to that large leafy organism in your front yard. You cannot derive the local rules of a code from the laws of physics, because hard physical laws necessarily exclude choice. On the other hand, the coder decides whether “1” means “on” or “off.” She decides whether “0” means “off” or “on.” Codes, by definition, are freely chosen. The rules of the code come before all else. These rules of any language are chosen with a goal in mind: communication, which is always driven by intent.
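As a toy illustration of the four components Marshall lists -- my own sketch, not his; the DNA-style four-letter code table is just an arbitrary example -- note that the decoder recovers the message only because it shares the same prearranged code table as the encoder:

CODE = {"A": "00", "C": "01", "G": "10", "T": "11"}   # 1. the code (freely chosen)
DECODE = {bits: symbol for symbol, bits in CODE.items()}

def encode(message):                                   # 2. an encoder that obeys the code
    return "".join(CODE[symbol] for symbol in message)

def decode(bitstring):                                 # 4. a decoder that obeys the same code
    return "".join(DECODE[bitstring[i:i + 2]] for i in range(0, len(bitstring), 2))

message = "GATTACA"                                    # 3. a message that obeys the code
assert decode(encode(message)) == message              # works only because encoder and decoder
                                                       # share the same prearranged code table
print(encode(message))                                 # 10001111000100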
bornagain77
January 3, 2017 at 01:59 PM PDT
RS, good point, an expert on differential equations is -- in that context! -- asking pointed questions about entropy, info flow, organisation and more. My own thoughts start from what happens when raw energy flows into a system as heat or the like. That it may exhaust some heat to a lower temperature reservoir is a secondary matter. And Clausius' heat flow situation that gives rise to the net entropy rise conclusion is based on such an inflow. GD needs to answer to the issue of increased statistical weight for the resulting macrostate. KF

PS: Then, there is the inconvenient little point that DNA has coded, in effect alphabetic text expressing algorithmic steps in it, in the heart of cell based life. A point originally put on the table by Crick, certainly by March 19, 1953. This is a LINGUISTIC phenomenon.
kairosfocus
January 3, 2017 at 12:35 PM PDT
Thanks BA77 for saving me from having to post all those references. If you want to understand entropy, there is just one "must read" article, and it's written by a mathematician, Granville Sewell. I have my beef with mathematicians doing physics, but the one thing they never do is fudge the numbers. Only after you've read Sewell can we start to talk about entropy, Earth, closed systems and the like. Otherwise I'm wasting my breath. Second, entropy increase = information decrease; entropy decrease = information increase. This is not my definition, it's Claude Shannon's definition of information in 1940, while the definition of entropy was coined closer to 1840. And finally, there really has never been an answer to the objection that Darwinism reverses entropy-flow. Oh, there have been many attempts to shut down debate, like global warming, but that doesn't constitute an explanation. A list of scientific book-length objections to Darwinism just crossed my desk, which contained over 50 titles starting in 1870 and continuing to the present. And while I cannot compete with GD for superlatives, those books are the awesomest, indisputabilest, cosmic evidence that the objection, raised early and often, has not gone away. But as I said before, read Granville. Then we can talk.
Robert Sheldon
January 3, 2017 at 11:21 AM PDT
TWSYF: actually, is that set of internalised mouth-noises the illusion termed "you" throws up anything more than just that: noise, full of sound and fury, signifying nothing . . . even as it pretends to dress up in a lab coat and to be a justification for atheistical evolutionary materialism? And if that illusion "you" thinks -- whatever further illusion that is -- so, on what basis traceable to nothing but blind chance and necessity playing meaninglessly on matter through energy? KF
kairosfocus
January 3, 2017 at 09:41 AM PDT
Wishful thinking, speculation, and faith. The new tag line for atheism.
Truth Will Set You Free
January 3, 2017 at 08:07 AM PDT
I took some time to summarize BA77's argument with my own editorial insertions for impact. I hope I've done it justice. Feel free to comment, revise, etc. - and attempts to refute it are always welcome:

Entropy

What materialists never tell us, in contradiction to their claim that the earth is an open system where the entry of energy permits a reduction in entropy, is that the energy allowed to enter the atmosphere of the Earth is constrained, i.e. finely-tuned, to 1 trillionth of a trillionth of the entire electromagnetic spectrum. This fine-tuning of the size of light's wavelengths and the constraints on the size allowable for the protein molecules of organic life strongly indicate that they were tailor-made for each other – and are obviously inexplicable in the materialist story-line. Moreover, the light coming from the sun must be of the 'right color' – another finely-tuned aspect to support life.

And even with these highly constrained factors, it still does not fully negate the disordering effects of pouring raw energy into an open system. This is made evident by the fact that objects left out in the sun age and deteriorate much more quickly than objects stored inside in cool conditions, away from the sun and heat. Instead of reducing entropy, just pouring raw energy into an 'open system' actually increases the disorder of the system. Again, we hear no argument thus far from materialists on these matters.

To offset this disordering effect that the raw energy of sunlight has on objects, the raw energy from the sun must be processed further to be of biological utility. This is accomplished by photosynthesis, which converts sunlight into ATP. Photosynthesis is an amazingly complex and dynamic process which involves about 100 proteins that are highly ordered within the photosynthetic membranes of the cell, thus further conflicting with the simplistic claim that an open system will decrease entropy. Again, materialists ignore this and don't even attempt to explain it.

The ability to do photosynthesis is widely distributed throughout the bacterial domain in six different phyla, and no one can point to a pattern of evolution as its origin and source. This is simply another example of the complete ignorance offered by the evolutionary community.

Additionally, scientists are in disagreement over what came first — replication, or metabolism, or energy. You need enzymes to make ATP and you need ATP to make enzymes. The question is: where did energy come from before either of these two things existed? Nobody is holding their breath for a Darwinist response.

Photosynthesis itself is finely-tuned as an entropy-reducer, as results showed a 100% free-energy transduction efficiency and a tight mechanochemical coupling of F1-ATPase. There was no expectation that this would be the case, since unintelligent forces are never efficient at all (and do not create finely-tuned processes). The means photosynthesis uses to achieve this astonishing efficiency in overcoming thermodynamic noise is by way of 'quantum coherence'. Biological systems can direct a quantum process, in this case energy transport, in astoundingly subtle and controlled ways – showing remarkable resistance to the aggressive, random background noise of biology and extreme environments. Photosynthetic organisms, such as plants and some bacteria, have mastered this process: in less than a couple of trillionths of a second, 95 percent of the sunlight they absorb is whisked away to drive the metabolic reactions that provide them with energy.
The efficiency of human-designed photovoltaic cells currently on the market is around 20 percent. Again, materialists just stay silent on this.

Moreover, protein folding is not achieved by random thermodynamic jostling but is also found to be achieved by 'quantum transition'. By the conventional thinking of evolutionists, a chain of amino acids can only change from one shape to another by mechanically passing through various shapes in between. But scientists show that the process is a quantum one and that idea is entirely false.

The movement of proteins in the cell also defies entropy. Almost 90 percent of DNA is covered with proteins and they are moving all the time. However, floating proteins find their targets for binding quickly as well. Scientists point out that this is counterintuitive, because one would think collisions between a protein and other molecules on DNA would slow it down. But the system defies entropy and conflicts with expectations, indicating that there is something special about the structure, about the order, inside a living cell. And indeed, in regard to quantum biology, there is much evidence confirming that current biologists working under the reductive materialistic framework of Darwinian evolution are not even using the correct theoretical framework to properly understand life in the first place. They are beyond clueless.

The very same materialists deny that any immaterial essences exist. Some deny that information has any empirical quality. However, it is non-material information that constrains biological life to be so far out of thermodynamic equilibrium. Information treated as independent of energy and matter 'resolves the thermodynamic issues and invokes the correct paradigm for understanding the vital area of thermodynamic/organisational interactions'. Information, which is immaterial, has now been experimentally shown to have a 'thermodynamic content'. Scientists have shown that energy can be converted from units of information. Scientists in Japan succeeded in converting information into free energy. Information, entropy, and energy should be treated on equal footings. However, materialists and evolutionists have simply tried to ignore these issues. Their worldview has nothing to offer on them anyway.

The quantity of information involved in keeping life so far out of thermodynamic equilibrium with the rest of the environment is enormous (4 x 10^12 bits). It's a fundamental aspect of life – and again materialists avoid mentioning this.

While Darwinists have many times appealed to the 'random thermodynamic jostling' in cells to try to say that life is not designed (e.g. Carl Zimmer's "barely constrained randomness" remark), the fact of the matter is that if anything ever gave evidence for the supernatural design of the universe it is the initial 1 in 10^10^123 entropy of the universe. This number, 1 in 10^10^123, is so large that, if it were written down in ordinary notation, it could not be written down even if you used every single particle of the universe to denote a decimal place. Yet, despite entropy's broad explanatory scope for the universe, in the quantum Zeno effect we find that "an unstable particle, if observed continuously, will never decay."

The destructive power of black holes is an example of entropy and of what one should expect everywhere in the universe.
However, on Earth we can see, in the science of the Shroud, a total lack of gravity, a lack of entropy (without gravitational collapse), no time, no space—it conforms to no known law of physics. Of course, materialists have no interest in this and try to ridicule it, thus revealing their own ignorance, bias, fear and lack of wonder or interest about what reality, life and the universe is.

In conclusion

Thus, contrary to the claims of Darwinists that entropy presents no obstacle for Darwinian evolution, the fact of the matter is that not only is entropy not compatible with life, but entropy is found to be the primary source of death and destruction in this universe. In fact, Jesus Christ, in his defiance of gravity at his resurrection from the dead, apparently had to deal directly with the deadly force of entropy.
Silver Asiatic
January 3, 2017 at 06:07 AM PDT
Silver Asiatic, GD states:
"Silver Asiatic, I wouldn’t be so impressed with ba77’s research unless you’ve checked it out for yourself;"
Which is an interesting statement for GD to make, since we have no reason whatsoever to trust anything GD says: GD, apparently, believes his conscious thoughts are merely, and ultimately, the end results of the 'random thermodynamic jostling' of the material particles of the universe and of his brain.
"Supposing there was no intelligence behind the universe, no creative mind. In that case, nobody designed my brain for the purpose of thinking. It is merely that when the atoms inside my skull happen, for physical or chemical reasons, to arrange themselves in a certain way, this gives me, as a by-product, the sensation I call thought. But, if so, how can I trust my own thinking to be true? It's like upsetting a milk jug and hoping that the way it splashes itself will give you a map of London. But if I can't trust my own thinking, of course I can't trust the arguments leading to Atheism, and therefore have no reason to be an Atheist, or anything else. Unless I believe in God, I cannot believe in thought: so I can never use thought to disbelieve in God." - C.S. Lewis, The Case for Christianity, p. 32 “It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms. In order to escape from this necessity of sawing away the branch on which I am sitting, so to speak, I am compelled to believe that mind is not wholly conditioned by matter”. J. B. S. Haldane ["When I am dead," in Possible Worlds: And Other Essays [1927], Chatto and Windus: London, 1932, reprint, p.209. Sam Harris's Free Will: The Medial Pre-Frontal Cortex Did It - Martin Cothran - November 9, 2012 Excerpt: There is something ironic about the position of thinkers like Harris on issues like this: they claim that their position is the result of the irresistible necessity of logic (in fact, they pride themselves on their logic). Their belief is the consequent, in a ground/consequent relation between their evidence and their conclusion. But their very stated position is that any mental state -- including their position on this issue -- is the effect of a physical, not logical cause. By their own logic, it isn't logic that demands their assent to the claim that free will is an illusion, but the prior chemical state of their brains. The only condition under which we could possibly find their argument convincing is if they are not true. The claim that free will is an illusion requires the possibility that minds have the freedom to assent to a logical argument, a freedom denied by the claim itself. It is an assent that must, in order to remain logical and not physiological, presume a perspective outside the physical order. http://www.evolutionnews.org/2012/11/sam_harriss_fre066221.html (1) rationality implies a thinker in control of thoughts. (2) under materialism a thinker is an effect caused by processes in the brain (determinism). (3) in order for materialism to ground rationality a thinker (an effect) must control processes in the brain (a cause). (1)&(2) (4) no effect can control its cause. Therefore materialism cannot ground rationality. per Box UD
Thus Silver Asiatic, since GD has apparently given up rationality altogether, in his denial of the reality of his own conscious mind and his free will, we have no reason whatsoever to trust anything that GD says about anything. We might as well ask the rustling of leaves for a coherent answer to some question rather than ask GD for one.

Moreover, the 'random thermodynamic jostling' of the atoms of GD's brain suggested that you check my cited research out further, and also claimed that the quantum Zeno effect "didn't involve conscious observation at all. It used a laser beam to put the freeze on decay." The objection raised by the random jostling of GD's brain to my cited research misses the mark on a couple of fronts.

First, in order for us to 'consciously observe' the atomic world in the first place it is necessary for us to use lasers or some other sort of detector. It simply is impossible for us to 'observe' atomic particles any other way in a laboratory experiment, since they are so small. Yet, just because we are forced to use detectors to 'consciously observe' the actions of the atomic world, that does not 'answer the question' as to why 'consciously observing' the atomic world, even with a detector, has such a dramatic impact on how the atomic world behaves. In the following video, at the 16:34 minute mark, the reason why detector interference does not explain quantum wave collapse is explained (i.e. observation changes the nature of what we are observing, not just the activity of what we are observing):
Quantum Physics And How We Affect Reality! - video - (17:21 minute mark) https://youtu.be/REATuidImYw?t=1041
Prior to that explanation in the video, Sean Carroll, an atheistic physics professor, tried to claim that it was the detector in the double slit experiment, as GD is currently trying to claim with the Zeno effect, that was solely responsible for the weird actions of the double slit. But after the interviewer pointed out that "observation changes the nature of what we are observing, not just the activity of what we are observing", Sean Carroll then backed off his original claim and honestly stated this:
'The short answer is we don't know. This is the fundamental mystery of quantum mechanics. The reason why quantum mechanics is 'difficult'. Mysteriously, when we look at things we see particles; when we are not looking, things are waves.'
- Sean Carroll
Moreover, specifically because of the 'detector objection' from atheists, I cited the 'interaction-free measurement' for the quantum Zeno effect. I did not cite the following 'direct interaction experiment', in which the laser directly interacted with the particles:
'Zeno effect' verified—atoms won't move while you watch - October 23, 2015, by Bill Steele
Excerpt: The researchers demonstrated that they were able to suppress quantum tunneling merely by observing the atoms. This so-called "Quantum Zeno effect", named for a Greek philosopher, derives from a proposal in 1977 by E. C. George Sudarshan and Baidyanath Misra at the University of Texas, Austin, who pointed out that the weird nature of quantum measurements allows, in principle, for a quantum system to be "frozen" by repeated measurements. Previous experiments have demonstrated the Zeno Effect with the "spins" of subatomic particles. "This is the first observation of the Quantum Zeno effect by real space measurement of atomic motion," Vengalattore said. "Also, due to the high degree of control we've been able to demonstrate in our experiments, we can gradually 'tune' the manner in which we observe these atoms. Using this tuning, we've also been able to demonstrate an effect called 'emergent classicality' in this quantum system." Quantum effects fade, and atoms begin to behave as expected under classical physics. The researchers observed the atoms under a microscope by illuminating them with a separate imaging laser. A light microscope can't see individual atoms, but the imaging laser causes them to fluoresce, and the microscope captured the flashes of light. When the imaging laser was off, or turned on only dimly, the atoms tunneled freely. But as the imaging beam was made brighter and measurements made more frequently, the tunneling reduced dramatically.
http://phys.org/news/2015-10-zeno-effect-verifiedatoms-wont.html
On the other hand, in the interaction-free measurement experiment that I actually cited in post 18, the quantum Zeno effect was 'detected without interacting with a single atom':
Interaction-free measurements by quantum Zeno stabilization of ultracold atoms – 14 April 2015 Excerpt: In our experiments, we employ an ultracold gas in an unstable spin configuration, which can undergo a rapid decay. The object—realized by a laser beam—prevents this decay because of the indirect quantum Zeno effect and thus, its presence can be detected without interacting with a single atom. http://www.nature.com/ncomms/2015/150414/ncomms7811/full/ncomms7811.html?WT.ec_id=NCOMMS-20150415
The principle behind 'interaction-free measurement' is explained much more clearly in the following video, which shows that although a detector is placed at only one slit in the double-slit experiment, the electron still collapses in the slit with no detector by it. That is, merely knowing the particle is not at one slit forces the wave to collapse to its particle state at the other slit, the one with no detector by it!
An Interaction-Free Quantum Experiment (the Zeilinger bomb-tester experiment; in the double-slit experiment the detector is placed at only one slit, yet the photon or electron still collapses in the unobserved slit) - video https://www.youtube.com/watch?v=vOv8zYla1wY
Richard Conn Henry remarks on the fallacious 'decoherence' objection of atheists
The Mental Universe - Richard Conn Henry - Professor of Physics, Johns Hopkins University
Excerpt: The only reality is mind and observations, but observations are not of things. To see the Universe as it really is, we must abandon our tendency to conceptualize observations as things.,,, Physicists shy away from the truth because the truth is so alien to everyday physics. A common way to evade the mental universe is to invoke "decoherence" - the notion that "the physical environment" is sufficient to create reality, independent of the human mind. Yet the idea that any irreversible act of amplification is necessary to collapse the wave function is known to be wrong: in "Renninger-type" experiments, the wave function is collapsed simply by your human mind seeing nothing. The universe is entirely mental,,,, The Universe is immaterial — mental and spiritual. Live, and enjoy.
http://henry.pha.jhu.edu/The.mental.universe.pdf
The following video more fully explains why the 'decoherence' objection by atheists does not solve the measurement problem in Quantum Mechanics:
The Measurement Problem in quantum mechanics - (Inspiring Philosophy) - 2014 video https://www.youtube.com/watch?v=qB7d5V71vUE
Thus Silver Asiatic, although GD stated that "I wouldn't be so impressed with ba77's research unless you've checked it out for yourself", the fact of the matter is that, since GD apparently denies the reality of his conscious mind and free will, we have no reason to trust the random jostling of GD's brain in the first place. Moreover, when we look further at the research I cited, we find that the atoms of GD's brain (purposely?) omitted the fact that I cited an experiment in which "the quantum Zeno effect (was) detected without interacting with a single atom". In other words, I did not cite the experiment where a laser directly interacted with the particles exhibiting the Zeno effect, as GD implied I did. Moreover, the quantum Zeno effect is far from the only evidence I could have cited for conscious observation having a dramatic effect on material reality. For instance, I could have also cited this recent variation of the Wheeler Delayed Choice experiment, in which it was found "That Reality Doesn't Exist If You Are Not Looking at It":
New Mind-blowing Experiment Confirms That Reality Doesn’t Exist If You Are Not Looking at It - June 3, 2015 Excerpt: The results of the Australian scientists’ experiment, which were published in the journal Nature Physics, show that this choice is determined by the way the object is measured, which is in accordance with what quantum theory predicts. “It proves that measurement is everything. At the quantum level, reality does not exist if you are not looking at it,” said lead researcher Dr. Andrew Truscott in a press release.,,, “The atoms did not travel from A to B. It was only when they were measured at the end of the journey that their wave-like or particle-like behavior was brought into existence,” he said. Thus, this experiment adds to the validity of the quantum theory and provides new evidence to the idea that reality doesn’t exist without an observer. http://themindunleashed.org/2015/06/new-mind-blowing-experiment-confirms-that-reality-doesnt-exist-if-you-are-not-looking-at-it.html “Reality is in the observations, not in the electron.” – Paul Davies “We have become participators in the existence of the universe. We have no right to say that the past exists independent of the act of observation.” – John Wheeler
Thus all in all, as usual, I find the atoms of GD's brain to be thoroughly disingenuous to the evidence at hand. Frankly, his lack of intellectual honesty is par for the course for people who oppose ID arguments. But why should we expect any different from a 'random jostling of atoms'? Of related interest to thermodynamics and consciously observing a single photon: the following researchers are thoroughly puzzled as to how it is remotely possible for us to become consciously aware of a single photon, in spite of the tremendous amount of thermodynamic noise that should prevent us from ever doing so:
Study suggests humans can detect even the smallest units of light - July 21, 2016 Excerpt: Research,, has shown that humans can detect the presence of a single photon, the smallest measurable unit of light. Previous studies had established that human subjects acclimated to the dark were capable only of reporting flashes of five to seven photons.,,, it is remarkable: a photon, the smallest physical entity with quantum properties of which light consists, is interacting with a biological system consisting of billions of cells, all in a warm and wet environment," says Vaziri. "The response that the photon generates survives all the way to the level of our awareness despite the ubiquitous background noise. Any man-made detector would need to be cooled and isolated from noise to behave the same way.",,, The gathered data from more than 30,000 trials demonstrated that humans can indeed detect a single photon incident on their eye with a probability significantly above chance. "What we want to know next is how does a biological system achieve such sensitivity? How does it achieve this in the presence of noise? http://phys.org/news/2016-07-humans-smallest.html
Verse:
2 Peter 1:16 For we have not followed cunningly devised fables, when we made known unto you the power and coming of our Lord Jesus Christ, but were eyewitnesses of his majesty.
bornagain77
January 3, 2017 at 05:11 AM PDT
Origenes @ 25: Yes, I noticed that mistake in GD's comment also. He states:
Far from showing something special about consciousness, it shows that (at least in this respect) a conscious observer can be replaced by a beam of light.
But in the paper you cite:
The researchers observed the atoms under a microscope by illuminating them with a separate imaging laser.
The laser is the means used to enable the observations.
Silver Asiatic
January 3, 2017 at 04:58 AM PDT
Gordon Davisson: But if you actually look at the experiment that he cites, it didn't involve conscious observation at all. It used a laser beam to put the freeze on decay.
Where does it say that the laser beam puts a freeze on decay? Or is it your personal hypothesis that this is the case?
Graduate students Yogesh Patil and Srivatsan K. Chakram created and cooled a gas of about a billion Rubidium atoms inside a vacuum chamber and suspended the mass between laser beams. In that state the atoms arrange in an orderly lattice just as they would in a crystalline solid. But at such low temperatures, the atoms can "tunnel" from place to place in the lattice. The famous Heisenberg uncertainty principle says that the position and velocity of a particle interact. Temperature is a measure of a particle's motion. Under extreme cold velocity is almost zero, so there is a lot of flexibility in position; when you observe them, atoms are as likely to be in one place in the lattice as another.

The researchers demonstrated that they were able to suppress quantum tunneling merely by observing the atoms. This so-called "Quantum Zeno effect", named for a Greek philosopher, derives from a proposal in 1977 by E. C. George Sudarshan and Baidyanath Misra at the University of Texas, Austin, who pointed out that the weird nature of quantum measurements allows, in principle, for a quantum system to be "frozen" by repeated measurements. [source: phys.org]
Origenes
January 3, 2017 at 04:11 AM PDT