Categories: Darwinist rhetorical tactics, Functionally Specified Complex Information & Organization, ID Foundations, Science, worldview issues/foundations and society, Selective Hyperskepticism, thermodynamics and information

Should ID supporters argue in terms of thermodynamics or information or [“basic . . . “] probability?


In the still active discussion thread on the failure of compensation arguments, long-term maverick ID supporter (and, I think, still YEC-sympathetic) SalC comments:

SalC, 570:    . . .  I’ve argued against using information theory type arguments in defense of ID, it adds way too much confusion. Basic probability will do the job, and basic probability is clear and unassailable.

The multiplicities of interest to ID proponents don’t vary with temperature, whereas the multiplicities from a thermodynamic perspective change with temperature. I find that very problematic for invoking 2LOT in defense of ID.

Algorithmically controlled metabolisms (such as realized in life) are low multiplicity constructs as a matter of principle. They are high in information content. But why add more jargon and terminology?

Most people understand “complex-computer-like machines such as living creatures are far from the expected outcome of random processes”. This is a subtle assertion of LLN [ –> The Law of Large Numbers in Statistics]. This is a different way of posing the Humpty Dumpty problem.

There are an infinite number of ways to make lock-and-key or login/password systems, but just because there are an infinite number of ways to do this does not make them highly probable from random processes . . . . Why invoke 2LOT? Why invoke fancy terms that add confusion? One deliberately uses unclear arguments when one wishes to obfuscate, but why be unclear when evidence is on one’s side? . . . .

LLN and variations of Jonathan Wells’ Humpty Dumpty argument have, I think, proven themselves in the arena of debate. Maybe even more tersely, extrapolating Pasteur, “life does not come from non-life.”

In response: while SalC has a point where a basic and non-controversial exchange is in hand, we usually do not have that luxury and must address the technical issues to some significant extent. When we do so, all of the above factors come out at once as differing facets of the same issue.

Facets that inextricably interact as parts of a whole.

Analogous to, say, the hearts and arrows effect seen in suitably, very specifically cut and polished diamonds:

[Image: hearts and arrows viewer]

In short, once we begin to probe beneath the surface, design thought shows itself to be . . . irreducibly complex.

Accordingly, I responded as follows at 592, which I think I should headline and augment:

_______________

KF, 592: >>Pardon, but I have a different take: on years of observation, any serious design argument will be twisted into pretzels, strawmannised, confused, clouded and generally distorted and dismissed by the sort of ruthlessly determined and too often amoral or outright nihilistic, truth- and fairness-disregarding objectors we frequently face.

This is because, too many such are “any means necessary”/”ends justify means” committed ideologues full of agit-prop talking points and agendas.

That’s exactly how the trained, indoctrinated Marxist agitators of my youth operated. Benumbed in conscience, insensitive to truth, fundamentally rage-blinded [even when charming], secure in their notion that they were the vanguard of the future/progress, and that they were championing pure victims of the oppressor classes who deserved anything they got.

(Just to illustrate the attitude, I remember one who falsely accused me of theft of an item of equipment kept in my lab. I promptly had it signed over to the Student Union once I understood the situation, then went to her office and confronted her with the sign-off. “How can you be so thin-skinned?” was her only response; taking full advantage of the rule that men must restrain themselves in dealing with women, however outrageous the latter, and of course seeking to wound further. Ironically, this champion of the working classes was from a much higher class origin than I was . . . actually, unsurprisingly. To see the parallels, notice how often not only objectors who come here but also the major materialist agit-prop organisations — without good grounds — insinuate, on our part, calculated dishonesty and utter incompetence to the point that we should not have been able to complete a degree.)

I suggest, first, that the pivot of design discussions on the world of life is functionally specific, complex, interactive, Wicken “wiring diagram” organisation of parts that achieve a whole performance based on particular arrangement and coupling, and the associated information. That information is sometimes explicit (R/DNA codes) and sometimes may be drawn out by using structured Y/N questions that describe the wiring pattern required to achieve function.

FSCO/I, for short.
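As a rough, toy illustration of the chain-of-Y/N-questions idea (the part count and orientation count below are made-up numbers, used only to show the arithmetic, not measurements of any real artifact), here is a minimal Python sketch:

```python
# Toy sketch with assumed numbers: how many structured yes/no questions
# (bits) are needed to single out one specific "wiring" of a small device
# in which 20 distinct parts each occupy one of 20 slots, each part in one
# of 4 discrete orientations.
from math import factorial, log2

parts = 20         # distinct parts (assumed for illustration)
orientations = 4   # discrete orientations per part (assumed)

configs = factorial(parts) * orientations ** parts  # all possible arrangements
bits = log2(configs)                                # yes/no questions to specify one

print(f"possible configurations: {float(configs):.2e}")  # ~2.67e+30
print(f"description length: {bits:.0f} bits")            # ~101 bits
```

Real cases (a reel, a cell) involve vastly more parts, couplings and tolerances; the point is only that a wiring diagram cashes out as a definite chain of yes/no specifications, i.e. a bit count.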

{Aug. 1:} Back to reels, to show the basic “please face and acknowledge facts” reality of FSCO/I; here, the Penn International trolling reel exploded view:

[Image: Penn International 50 reel, exploded view]

. . . and a video showing the implications of this “wiring diagram” for how it is put together in the factory:

[Embedded video: Penn International reel factory assembly]

. . . just remember, the arm-hand system is a complex, multi-axis cybernetic manipulator-arm:

[Image: labelled model of the arm as a multi-axis manipulator]

This concept is not new; it goes back to Orgel, 1973:

. . . In brief, living organisms [–> functional context] are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . .

[HT, Mung, fr. p. 190 & 196:] These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [–> this is of course equivalent to the string of yes/no questions required to specify the relevant “wiring diagram” for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002 . . . ] One can see intuitively that many instructions are needed to specify a complex structure. [–> so if the q’s to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions. [–> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes. [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196.]

. . . as well as Wicken, 1979:

‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]

. . . and is pretty directly stated by Dembski in NFL:

p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.

I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .

Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. . . . In virtue of their function [[a living organism’s subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole. Dembski cites:

Wouters, p. 148: “globally in terms of the viability of whole organisms,”

Behe, p. 148: “minimal function of biochemical systems,”

Dawkins, pp. 148 – 9: “Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction.”

On p. 149, he roughly cites Orgel’s famous remark from 1973, which, exactly cited, reads:

In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . .

And, p. 149, he highlights Paul Davies in The Fifth Miracle: “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity.”] . . .”

p. 144: [[Specified complexity can be more formally defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”
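As a quick arithmetic cross-check on the 500-bit/10^150 correspondence just cited (a minimal sketch; nothing beyond a logarithm identity is assumed):

```python
# 500 bits of configuration space corresponds to 2^500 possibilities,
# which is on the order of the 10^150 universal probability bound.
from math import log10

bits = 500
print(f"2^{bits} ~ 10^{bits * log10(2):.1f}")  # 2^500 ~ 10^150.5, i.e. ~3.3e150
```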

What happens at the relevant cellular level is that this comes down to highly endothermic, C-chemistry, aqueous-medium-context macromolecules in complexes that are organised to achieve the highly integrated and specific interlocking functions required for metabolising, self-replicating cells to work.

[Image: self-replication schematic (Mignea)]

This implicates huge quantities of information manifest in the highly specific functional organisation, which is observable at a much coarser resolution than the nm range of basic molecular interactions. That is, we see tightly constrained clusters of micro-level arrangements — states — consistent with function, as opposed to the much larger numbers of possible but overwhelmingly non-functional ways the same atoms and monomer components could be chemically and/or physically clumped “at random.” In turn, that is a lot fewer ways than those in which the same components could be scattered across a Darwin’s pond or the like.

{Aug. 2} For illustration, let us consider the protein synthesis process at a gross level:

[Image: protein synthesis, overview]

. . . spotlighting and comparing the ribosome in action as a coded-tape, numerically controlled machine:

[Image: FSCO/I facts infographic (the ribosome as a coded-tape NC machine)]

. . . then at a little more zoomed in level:

[Image: Protein Synthesis (HT: Wiki Media)]

. . . then in the wider context of cellular metabolism [protein synthesis is the little bit with two call-outs in the top left of the infographic]:

[Image: cell metabolism network]

Thus, starting from the “typical” diffused condition, we readily see that work must be done to clump components at random, and further work to configure them in functionally specific ways.

With implications for this component of entropy change.

As well as for the direction of the clumping and assembly process to get the right parts together, organised in the right cluster of ways that are consistent with function.

Thus, there are implications of prescriptive information that specifies the relevant wiring diagram. (Think of AutoCAD files, etc., as a comparison.)

Pulling back, we can see that to achieve such, the reasonable — and empirically warranted — expectation, is

a: to find energy, mass and information sources and flows associated with

b: energy converters that provide shaft work or controlled flows [I use a heat engine here but energy converters are more general than that], linked to

[Image: a heat engine partially converts heat into work]

c: constructors that carry out the particular work, under control of

d: relevant prescriptive information that explicitly or implicitly regulates assembly to match the wiring diagram requisites of function,

[Image: a von Neumann kinematic self-replicator]

. . . [u/d Apr 13] or, comparing and contrasting a Maxwell’s Demon model that imposes organisation by choice with use of mechanisms, courtesy Abel:

[Image: Maxwell’s Demon vs spontaneous change far from equilibrium (Abel)]

. . . also with

e: exhaust or dissipation otherwise of degraded energy [typically, but not only, as heat . . . ] and discarding of wastes. (Which last gives relevant compensation where dS cosmos rises. Here, we may note SalC’s own recent cite on that law from Clausius, at 570 in the previous thread that shows what “relevant” implies: Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.)

{Added, April 19, 2015: Clausius’ statement:}

[Image: Clausius’ 1854 statement]
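To make the “relevant compensation” bookkeeping concrete, here is a minimal sketch of the Clausius-style entropy accounting for a quantity of heat d’Q passing from a hotter body A to a colder body B. The temperatures and the heat quantity are assumed values, chosen only for illustration:

```python
# Clausius-style bookkeeping: d'Q leaving hot body A lowers S_A by d'Q/T_A,
# while the same d'Q entering colder body B raises S_B by d'Q/T_B.
# Since T_A > T_B, the net entropy change is positive, which is the
# direction 2LOT permits for spontaneous heat transfer.
d_Q = 100.0   # joules transferred (assumed)
T_A = 400.0   # hotter body, in kelvin (assumed)
T_B = 300.0   # colder body, in kelvin (assumed)

dS_A = -d_Q / T_A
dS_B = +d_Q / T_B
print(f"dS_A = {dS_A:+.3f} J/K, dS_B = {dS_B:+.3f} J/K, net = {dS_A + dS_B:+.3f} J/K")
# B's entropy rise (+0.333 J/K) more than compensates A's fall (-0.250 J/K);
# the "compensation" is relevant because the two changes are directly coupled.
```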

By contrast with such, there seems to be a strong belief that irrelevant mass and/or energy flows, without coupled converters, constructors and prescriptive organising information, can somehow, through phenomena such as diffusion and fluctuations, credibly hit on a replicating entity that can then ratchet up into a fully encapsulated, gated, metabolising, algorithmic-code-using, self-replicating cell.

Such is thermodynamically — yes, thermodynamically, informationally and probabilistically [loose sense] utterly implausible. And, the sort of implied genes-first/RNA-world, or alternatively metabolism-first, scenarios that have been suggested are without foundation in empirically observed adequate cause tracing only to blind chance and mechanical necessity.

{U/D, Apr 13:} Abel 2012 makes much the same point, in his book chapter, MOVING ‘FAR FROM EQUILIBRIUM’ IN A PREBIOTIC ENVIRONMENT: The role of Maxwell’s Demon in life origin :

Mere heterogeneity and/or order do not even begin to satisfy the necessary and sufficient conditions for life. Self-ordering tendencies provide no mechanism for self-organization, let alone abiogenesis. All sorts of physical astronomical “clumping,” weak-bonded molecular alignments, phase changes, and outright chemical reactions occur spontaneously in nature that have nothing to do with life. Life is organization-based, not order-based. As we shall see below in Section 6, order is poisonous to organization.

Stochastic ensembles of nucleotides and amino acids can polymerize naturalistically (with great difficulty). But functional sequencing of those monomers cannot be determined by any fixed physicodynamic law. It is well-known that only one 150-mer polyamino acid string out of 10^74 stochastic ensembles folds into a tertiary structure with any hint of protein function (Axe, 2004). This takes into full consideration the much publicized substitutability of amino acids without loss of function within a typical protein family membership. The odds are still only one functional protein out of 10^74 stochastic ensembles. And 150 residues are of minimal length to qualify for protein status. Worse yet, spontaneously condensed Levo-only peptides with peptide-only bonds between only biologically useful amino acids in a prebiotic environment would rarely exceed a dozen mers in length. Without polycodon prescription and sophisticated ribosome machinery, not even polypeptides form that would contribute much to “useful biological work.” . . . .

There are other reasons why merely “moving far from equilibrium” is not the key to life as seems so universally supposed. Disequilibrium stemming from mere physicodynamic constraints and self-ordering phenomena would actually be poisonous to life-origin (Abel, 2009b). The price of such constrained and self-ordering tendencies in nature is the severe reduction of Shannon informational uncertainty in any physical medium (Abel, 2008b, 2010a). Self-ordering processes preclude information generation because they force conformity and reduce freedom of selection. If information needs anything, it is the uncertainty made possible by freedom from determinism at true decisions nodes and logic gates. Configurable switch-settings must be physicodynamically inert (Rocha, 2001; Rocha & Hordijk, 2005) for genetic programming and evolution of the symbol system to take place (Pattee, 1995a, 1995b). This is the main reason that Maxwell’s Demon model must use ideal gas molecules. It is the only way to maintain high uncertainty and freedom from low informational physicochemical determinism. Only then is the control and regulation so desperately needed for organization and life-origin possible. The higher the combinatorial possibilities and epistemological uncertainty of any physical medium, the greater is the information recordation potential of that matrix.

Constraints and law-like behavior only reduce uncertainty (bit content) of any physical matrix. Any self-ordering tendency precludes the freedom from law needed to program logic gates and configurable switch settings. The regulation of life requires not only true decision nodes, but wise choices at each decision node. This is exactly what Maxwell’s Demon does. No yet-to-be discovered physicodynamic law will ever be able to replace the Demon’s wise choices, or explain the exquisite linear digital PI programming and organization of life (Abel, 2009a; Abel & Trevors, 2007). Organization requires choice contingency rather than chance contingency or law (Abel, 2008b, 2009b, 2010a). This conclusion comes via deductive logical necessity and clear-cut category differences, not just from best-thus-far empiricism or induction/abduction.

In short, the three perspectives converge. Thermodynamically, the implausibility of finding information-rich FSCO/I in islands of function in vast config spaces . . .

[Image: CSI definition] . . . — where we can picture the search by using coins as stand-ins for one-bit registers —

[Image: coin-flipping search illustration]

. . . links directly to the overwhelmingly likely outcome of spontaneous processes. Such is, of course, a probabilistically grounded conclusion. And, information is often quantified on the same probability thinking.
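To put rough numbers on the coin picture (a sketch only: the island size used below is an assumed, deliberately generous figure, not a measured one):

```python
# Scale of a blind, flat-random search over n one-bit registers ("coins"),
# compared with an assumed island of functional configurations.
from math import log10

n_bits = 500                       # one-bit registers / coins
log10_space = n_bits * log10(2)    # log10 of the 2^500 total configurations
log10_island = 50                  # assume a generous island of 10^50 functional configs

log10_hit = log10_island - log10_space
print(f"config space         ~ 10^{log10_space:.1f}")   # ~10^150.5
print(f"hit chance per trial ~ 10^{log10_hit:.1f}")      # ~10^-100.5
# Even granting, say, 10^62 blind trials, the expected number of hits is
# about 10^(62 - 100.5) = 10^-38.5, i.e. effectively zero.
```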

Taking a step back to App A of my always-linked note, following Thaxton, Bradley and Olsen in TMLO, 1984, and amplifying a bit:

. . . Going forward to the discussion in Ch 8, in light of the definition dG = dH – TdS, we may then split up the TdS term into contributing components, thus:

First, dG = [dE + PdV] – TdS . . . [Eqn A.9, cf def’ns for G, H above]

But, [1] since pressure-volume work [–> the PdV term] may be seen as negligible in the context we have in mind, and [2] since we may look at dE as shifts in bonding energy [which will be more or less the same in DNA or polypeptide/protein chains of the same length regardless of the sequence of the monomers], we may focus on the TdS term. This brings us back to the clumping then configuring sequence of changes in entropy in the Micro-Jets example above:

dG = dH – T[dSclump + dSconfig] . . . [Eqn A.10, cf. TBO 8.5]

Of course, we have already addressed the reduction in entropy on clumping and the further reduction in entropy on configuration, through the thought expt. etc., above. In the DNA or protein formation case, more or less the same thing happens. Using Brillouin’s negentropy formulation of information, we may see that the dSconfig is the negative of the information content of the molecule.

A bit of back-tracking will help:

S = k ln W . . . Eqn A.3

{U/D Apr 19: Boltzmann’s tombstone}

[Image: Boltzmann’s tombstone, engraved with S = k log W]

Now, W may be seen as a composite of the ways energy as well as mass may be arranged at micro-level. That is, we are marking a distinction between the entropy component due to ways energy [here usually, thermal energy] may be arranged, and that due to the ways mass may be configured across the relevant volume. The configurational component arises from in effect the same considerations as lead us to see a rise in entropy on having a body of gas at first confined to part of an apparatus, then allowing it to freely expand into the full volume:

Free expansion:

|| * * * * * * * * | . . . . .  ||

Then:

|| * * * * * * * * ||

Or, as Prof Gary L. Bertrand of the University of Missouri-Rolla summarises:

The freedom within a part of the universe may take two major forms: the freedom of the mass and the freedom of the energy. The amount of freedom is related to the number of different ways the mass or the energy in that part of the universe may be arranged while not gaining or losing any mass or energy. We will concentrate on a specific part of the universe, perhaps within a closed container. If the mass within the container is distributed into a lot of tiny little balls (atoms) flying blindly about, running into each other and anything else (like walls) that may be in their way, there is a huge number of different ways the atoms could be arranged at any one time. Each atom could at different times occupy any place within the container that was not already occupied by another atom, but on average the atoms will be uniformly distributed throughout the container. If we can mathematically estimate the number of different ways the atoms may be arranged, we can quantify the freedom of the mass. If somehow we increase the size of the container, each atom can move around in a greater amount of space, and the number of ways the mass may be arranged will increase . . . .

The thermodynamic term for quantifying freedom is entropy, and it is given the symbol S. Like freedom, the entropy of a system increases with the temperature and with volume . . . the entropy of a system increases as the concentrations of the components decrease. The part of entropy which is determined by energetic freedom is called thermal entropy, and the part that is determined by concentration is called configurational entropy.”

In short, degree of confinement in space constrains the degree of disorder/”freedom” that masses may have. And, of course, confinement to particular portions of a linear polymer is no less a case of volumetric confinement (relative to being free to take up any location at random along the chain of monomers) than is confinement of gas molecules to one part of an apparatus. And, degree of such confinement may appropriately be termed, degree of “concentration.”

Diffusion is a similar case: infusing a drop of dye into a glass of water — the particles spread out across the volume and we see an increase of entropy there. (The micro-jets case of course is effectively diffusion in reverse, so we see the reduction in entropy on clumping and then also the further reduction in entropy on configuring to form a flyable microjet.)
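A minimal worked example of that configurational component, assuming an ideal gas (or a dilute solute such as the dye) allowed to spread from half of a container into the whole of it:

```python
# Configurational entropy change on free expansion / dilution of n moles
# from volume V1 into volume V2 (ideal-gas / dilute-solution approximation):
# dS_config = n * R * ln(V2/V1).
from math import log

R = 8.314          # gas constant, J/(mol*K)
n = 1.0            # moles (assumed for illustration)
V1, V2 = 1.0, 2.0  # relative volumes: confined to half, then the whole

dS_config = n * R * log(V2 / V1)
print(f"dS_config = {dS_config:+.2f} J/K")  # ~ +5.76 J/K on doubling the volume
# Running the process in reverse (the "clumping" step of the micro-jets
# thought experiment) would be an entropy decrease of the same magnitude.
```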

So, we are justified in reworking the Boltzmann expression to separate clumping/thermal and configurational components:

S = k ln (Wclump*Wconfig)

= k ln (Wth*Wc) . . . [Eqn A.11, cf. TBO 8.2a]

or, S = k ln Wth + k ln Wc = Sth + Sc . . . [Eqn A.11.1]

We now focus on the configurational component, the clumping/thermal one being in effect the same for at-random or specifically configured DNA or polypeptide macromolecules of the same length and proportions of the relevant monomers, as it is essentially energy of the bonds in the chain, which are the same in number and type for the two cases. Also, introducing Brillouin’s negentropy formulation of Information, with the configured macromolecule [m] and the random molecule [r], we see the increment in information on going from the random to the functionally specified macromolecule:

IB = -[Scm – Scr] . . . [Eqn A.12, cf. TBO 8.3a]

Or, IB = Scr – Scm = k ln Wcr – k ln Wcm

= k ln (Wcr/Wcm) . . . [Eqn A12.1.]

Where also, for N objects in a linear chain, n1 of one kind, n2 of another, and so on to ni, we may see that the number of ways to arrange them (we need not complicate the matter by talking of Fermi-Dirac statistics, as TBO do!) is:

W = N!/[n1!n2! . . . ni!] . . . [Eqn A13, cf TBO 8.7]

So, we may look at a 100-monomer protein, with on average 5 each of the 20 types of amino acid monomers along the chain, with the aid of log manipulations — take logs to base 10, do the sums in log form, then take back out the logs — to handle numbers over 10^100 on a calculator:

Wcr = 100!/[(5!)^20] ≈ 2.4*10^116

For the sake of initial argument, we consider a unique polymer chain, so that each monomer is confined to a specified location, i.e. Wcm = 1, and Scm = 0. This yields — through basic equilibrium of chemical reaction thermodynamics (follow the onward argument in TBO Ch 8) and the Brillouin information measure which contributes to estimating the relevant Gibbs free energies (and with some empirical results on energies of formation etc) — an expected protein concentration of ~10^-338 molar, i.e. far, far less than one molecule per planet. (There may be about 10^80 atoms in the observed universe, with Carbon a rather small fraction thereof; and 1 mole of atoms is ~6.02*10^23 atoms.) Recall, known life forms routinely use dozens to hundreds of such information-rich macromolecules, in close proximity, in an integrated self-replicating information system on the scale of about 10^-6 m.
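As a cross-check on the combinatorics above (the arithmetic only; the onward equilibrium-concentration step needs TBO’s thermochemical data and is not reproduced here):

```python
# Distinguishable sequences for a 100-monomer chain built from 20 monomer
# types, 5 of each (Eqn A13), and the Brillouin information gained in going
# from a random sequence to a uniquely specified one (Wcm = 1, Scm = 0).
from math import factorial, log, log2

k_B = 1.380649e-23                           # Boltzmann constant, J/K

W_cr = factorial(100) // factorial(5) ** 20  # multinomial count of sequences
W_cm = 1                                     # unique functional sequence assumed

I_B = k_B * log(W_cr / W_cm)                 # k ln(Wcr/Wcm), J/K per molecule

print(f"Wcr ~ {float(W_cr):.2e}")            # ~2.43e+116
print(f"IB  ~ {I_B:.2e} J/K per molecule")   # ~3.70e-21 J/K
print(f"    ~ {log2(W_cr):.0f} bits")        # ~387 bits of configurational info
```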

Of course, if one comes at the point from any of these directions, selectively hyperskeptical demands will be rolled out to fire off salvo after salvo of objections. Selective, as the blind-chance, needle-in-haystack models that cannot pass vera causa as a test simply are not subjected to such scrutiny and scathing dismissiveness by the same objectors. When seriously pressed, the most they are usually prepared to concede is that perhaps we don’t yet know enough, but rest assured “Science” will triumph, so don’t you dare put up “god of the gaps” notions.

To see what I mean, notice [HT: BA 77 et al] the bottom line of a recent article on OOL conundrums:

. . . So the debate rages on. Over the past few decades scientists have edged closer to understanding the origin of life, but there is still some way to go, which is probably why when Robyn Williams asked Lane, ‘What was there in the beginning, do you think?’, the scientist replied wryly: ‘Ah, “think”. Yes, we have no idea, is the bottom line.’

But in fact, adequate cause for FSCO/I is not hard to find: intelligently directed configuration meeting requisites a – e just above. Design.

There are trillions of cases in point.

And that is why I demand that — whatever flaws, elaborations, adjustments etc. we may find or want to make — we listen carefully and fairly to Granville Sewell’s core point:

[Image: “The Emperor has no clothes” illustration]
You are under arrest, for bringing the Emperor into disrepute . . .

. . . The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur.

The discovery that life on Earth developed through evolutionary “steps,” coupled with the observation that mutations and natural selection — like other natural forces — can cause (minor) change, is widely accepted in the scientific world as proof that natural selection — alone among all natural forces — can create order out of disorder, and even design human brains, with human consciousness. Only the layman seems to see the problem with this logic. In a recent Mathematical Intelligencer article [“A Mathematician’s View of Evolution,” The Mathematical Intelligencer 22, number 4, 5-7, 2000] I asserted that the idea that the four fundamental forces of physics alone could rearrange the fundamental particles of Nature into spaceships, nuclear power plants, and computers, connected to laser printers, CRTs, keyboards and the Internet, appears to violate the second law of thermodynamics in a spectacular way.1 . . . .

What happens in a[n isolated] system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. As I wrote in “Can ANYTHING Happen in an Open System?”, “order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door…. If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth’s atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here.” Evolution is a movie running backward, that is what makes it special.

THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn’t, that atoms would rearrange themselves into spaceships and computers and TV sets . . . [NB: Emphases added. I have also substituted in isolated system terminology as GS uses a different terminology.]

Surely, there is room to listen, and to address concerns on the merits. >>

_______________

I think we need to appreciate that the design inference applies to all three of thermodynamics, information and probability, and that we will find determined objectors who will attack all three in a selectively hyperskeptical manner.  We therefore need to give adequate reasons for what we hold, for the reasonable onlooker. END

PS: As it seems unfortunately necessary, I here excerpt the Wikipedia “simple” summary derivation of 2LOT from statistical mechanics considerations as at April 13, 2015 . . . a case of technical admission against general interest, giving the case where distributions are not necessarily equiprobable. This shows the basis of the point that for over 100 years now, 2LOT has been inextricably rooted in statistical-molecular considerations (where, it is those considerations that lead onwards to the issue that FSCO/I, which naturally comes in deeply isolated islands of function in large config spaces, will be maximally implausible to discover through blind, needle in haystack search on chance and mechanical necessity):

[Image: Wikipedia’s “simple” statistical-mechanics summary derivation of 2LOT]

With this in hand, I again cite a favourite basic College level Physics text, as summarised in my online note App I:

Yavorski and Pinski, in the textbook Physics, Vol I [MIR, USSR, 1974, pp. 279 ff.], summarise the key implication of the macro-state and micro-state view well: as we consider a simple model of diffusion, let us think of ten white and ten black balls in two rows in a container. There is of course but one way in which there are ten whites in the top row; the balls of any one colour being for our purposes identical. But on shuffling, there are 63,504 ways to arrange five each of black and white balls in the two rows, and 6-4 distributions may occur in two ways, each with 44,100 alternatives. So, if we for the moment see the set of balls as circulating among the various different possible arrangements at random, and spending about the same time in each possible state on average, the time the system spends in any given state will be proportionate to the relative number of ways that state may be achieved. Immediately, we see that the system will gravitate towards the cluster of more evenly distributed states. In short, we have just seen that there is a natural trend of change at random, towards the more thermodynamically probable macrostates, i.e the ones with higher statistical weights. So “[b]y comparing the [thermodynamic] probabilities of two states of a thermodynamic system, we can establish at once the direction of the process that is [spontaneously] feasible in the given system. It will correspond to a transition from a less probable to a more probable state.” [p. 284.] This is in effect the statistical form of the 2nd law of thermodynamics. Thus, too, the behaviour of the Clausius isolated system above [with interacting sub-systemd A and B that transfer d’Q to B due to temp. difference] is readily understood: importing d’Q of random molecular energy so far increases the number of ways energy can be distributed at micro-scale in B, that the resulting rise in B’s entropy swamps the fall in A’s entropy. Moreover, given that [FSCO/I]-rich micro-arrangements are relatively rare in the set of possible arrangements, we can also see why it is hard to account for the origin of such states by spontaneous processes in the scope of the observable universe. (Of course, since it is as a rule very inconvenient to work in terms of statistical weights of macrostates [i.e W], we instead move to entropy, through s = k ln W. Part of how this is done can be seen by imagining a system in which there are W ways accessible, and imagining a partition into parts 1 and 2. W = W1*W2, as for each arrangement in 1 all accessible arrangements in 2 are possible and vice versa, but it is far more convenient to have an additive measure, i.e we need to go to logs. The constant of proportionality, k, is the famous Boltzmann constant and is in effect the universal gas constant, R, on a per molecule basis, i.e we divide R by the Avogadro Number, NA, to get: k = R/NA. The two approaches to entropy, by Clausius, and Boltzmann, of course, correspond. In real-world systems of any significant scale, the relative statistical weights are usually so disproportionate, that the classical observation that entropy naturally tends to increase, is readily apparent.)
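The Yavorski and Pinski counts are easy to verify; here is a minimal sketch of the binomial bookkeeping only:

```python
# Statistical weights for the two-row model with 10 white and 10 black balls:
# for w whites in the top row there are C(10, w) ways to place them there and
# C(10, 10 - w) ways to place the remaining whites in the bottom row.
# The 5-5 split carries the largest weight, so a randomly shuffling system
# spends most of its time at or near that macrostate.
from math import comb

total = 0
for w_top in range(11):                              # white balls in the top row
    weight = comb(10, w_top) * comb(10, 10 - w_top)
    total += weight
    if w_top in (5, 6, 10):
        print(f"{w_top}-{10 - w_top} split: {weight:,} way(s)")

print(f"all splits combined: {total:,} way(s)")
# 5-5 split: 63,504; 6-4 split: 44,100 (and 44,100 again for 4-6); 10-0: 1.
```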

This underlying context is easily understood and leads logically to 2LOT as an overwhelmingly likely consequence. Beyond a reasonable scale, fluctuations beyond a very narrow range are statistical miracles that we have no right to expect to observe.

And, that then refocusses the issue of connected, concurrent energy flows to provide compensation for local entropy reductions.

 

83 Replies to “Should ID supporters argue in terms of thermodynamics or information or [“basic . . . “] probability?”

  1. kairosfocus says:

    F/N: For the record, on why I will continue to address probability/expectation/fluctuation issues, information (and particularly FSCO/I) and linked thermodynamics issues tied to the statistical underpinnings of 2LOT. KF

    PS: For fellow anglers, I found a vid on how Penn International reels are made, and use it to illustrate the reality and requisites of FSCO/I.

  2. WJM says:
    Often the terminology used by those we debate against itself reveals what is hidden underneath, like when our opponents use the term “is consistent with” or demand we answer “are you saying the 2LoT has been violated?”

    What is being hidden is that they are preventing a reasonable discussion of the plausibility, as opposed to the bare possibility, that certain arrangements of matter can in good faith be expected to spontaneously occur, given any amount of time and given the observed probabilistic/entropic habits of matter/energy/information.

    Saying that X arrangement “could have” occurred and still be “consistent with” 2LoT (or some other probabilistic principle) is essentially avoiding the argument. Saying that “highly improbable things happen all the time” is avoiding the argument.

  3. kairosfocus says:

    F/N: In glancing at GS’ just released essay collection, I note this:

    In a June 15, 2012 post at http://www.evolutionnews.org , Max Planck Institute biologist W.E. Lönnig said “Normally the better your arguments are, the more people open their minds to your theory, but with ID, the better your arguments are, the more they close their minds, and the angrier they become. This is science upside down.”

    This seems to be a good slice of the problem we face.

    KF

  4. niwrad says:

    Thanks kairosfocus for your bold article.
    I agree, the ID arguments are countless. Some IDers prefer one, while other IDers prefer another. Anyway, there is room for all, and yes, as you say, they are “facets that inextricably interact as parts of a whole”.

  5. kairosfocus says:

    Niw & WJM:

    I think it is time to stand; but then, that is literally written into my name and blood, backed by 1,000 years of history. Tends to give a nose for kairos.

    Why not clip Wiki, counselling us aright against its known general leanings:

    In rhetoric kairos is “a passing instant when an opening appears which must be driven through with force if success is to be achieved.”[1]

    Kairos was central to the Sophists, who stressed the rhetor’s ability to adapt to and take advantage of changing, contingent circumstances. In Panathenaicus, Isocrates writes that educated people are those “who manage well the circumstances which they encounter day by day, and who possess a judgment which is accurate in meeting occasions as they arise and rarely misses the expedient course of action”.

    Kairos is also very important in Aristotle’s scheme of rhetoric. Kairos is, for Aristotle, the time and space context in which the proof will be delivered. Kairos stands alongside other contextual elements of rhetoric: The Audience, which is the psychological and emotional makeup of those who will receive the proof; and To Prepon, which is the style with which the orator clothes the proof.

    (My favourite case lies in Ac 17, where Paul was literally laughed out of court in Athens. But at length, the few, the despised and dismissed, the ridiculed prevailed. Because, of the sheer raw power of their case never mind who dominate and manipulate at the moment.)

    Now, yes, it is a bold stance to say that, as facets of a jewel cannot be separated without destroying its function, the LLN-probability/fluctuations, statistical thermodynamics, informational view and information facets of the design case are inextricably, irreducibly interactive and mutually reinforcing. With the statistical underpinnings of 2LOT being prominent.

    It is worth clipping SalC’s own recent cite on that law from Clausius, at 570 in the previous thread:

    Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time

    He of course emphasises the reference to heat; I have highlighted the reference to relevance of the concurrent energy process. I also, again, refer to the fact that for 100+ years, 2LOT has been inextricably linked to statistical underpinnings that go well beyond just heat flow.

    I again cite a favourite basic College level Physics text, as summarised in my online note App I:

    Yavorski and Pinski, in the textbook Physics, Vol I [MIR, USSR, 1974, pp. 279 ff.], summarise the key implication of the macro-state and micro-state view well: as we consider a simple model of diffusion, let us think of ten white and ten black balls in two rows in a container. There is of course but one way in which there are ten whites in the top row; the balls of any one colour being for our purposes identical. But on shuffling, there are 63,504 ways to arrange five each of black and white balls in the two rows, and 6-4 distributions may occur in two ways, each with 44,100 alternatives. So, if we for the moment see the set of balls as circulating among the various different possible arrangements at random, and spending about the same time in each possible state on average, the time the system spends in any given state will be proportionate to the relative number of ways that state may be achieved. Immediately, we see that the system will gravitate towards the cluster of more evenly distributed states. In short, we have just seen that there is a natural trend of change at random, towards the more thermodynamically probable macrostates, i.e the ones with higher statistical weights. So “[b]y comparing the [thermodynamic] probabilities of two states of a thermodynamic system, we can establish at once the direction of the process that is [spontaneously] feasible in the given system. It will correspond to a transition from a less probable to a more probable state.” [p. 284.] This is in effect the statistical form of the 2nd law of thermodynamics. Thus, too, the behaviour of the Clausius isolated system above [with interacting sub-systemd A and B that transfer d’Q to B due to temp. difference] is readily understood: importing d’Q of random molecular energy so far increases the number of ways energy can be distributed at micro-scale in B, that the resulting rise in B’s entropy swamps the fall in A’s entropy. Moreover, given that [FSCO/I]-rich micro-arrangements are relatively rare in the set of possible arrangements, we can also see why it is hard to account for the origin of such states by spontaneous processes in the scope of the observable universe. (Of course, since it is as a rule very inconvenient to work in terms of statistical weights of macrostates [i.e W], we instead move to entropy, through s = k ln W. Part of how this is done can be seen by imagining a system in which there are W ways accessible, and imagining a partition into parts 1 and 2. W = W1*W2, as for each arrangement in 1 all accessible arrangements in 2 are possible and vice versa, but it is far more convenient to have an additive measure, i.e we need to go to logs. The constant of proportionality, k, is the famous Boltzmann constant and is in effect the universal gas constant, R, on a per molecule basis, i.e we divide R by the Avogadro Number, NA, to get: k = R/NA. The two approaches to entropy, by Clausius, and Boltzmann, of course, correspond. In real-world systems of any significant scale, the relative statistical weights are usually so disproportionate, that the classical observation that entropy naturally tends to increase, is readily apparent.)

    This underlying context is easily understood and leads logically to 2LOT as an overwhelmingly likely consequence. Beyond a reasonable scale, fluctuations beyond a very narrow range are statistical miracles that we have no right to expect to observe.

    And, that then refocusses the issue of connected, concurrent energy flows to provide compensation for local entropy reductions.

    As I point out in the OP, where something like origin of FSCO/I is concerned, the only actually observed pattern is that:

    . . . the reasonable — and empirically warranted — expectation, is

    a: to find energy, mass and information sources and flows associated with

    b: energy converters that provide shaft work or controlled flows [I use a heat engine here but energy converters are more general than that], linked to

    [Image: a heat engine partially converts heat into work]

    c: constructors that carry out the particular work, under control of

    d: relevant prescriptive information that explicitly or implicitly regulates assembly to match the wiring diagram requisites of function, also with

    e: exhaust or dissipation otherwise of degraded energy [typically, but not only, as heat . . . ] and discarding of wastes. (Which last gives relevant compensation where dS cosmos rises.)

    In short, appeal to irrelevant “compensation” is questionable.

    That is the context in which we can see the force of your remark, WJM, that:

    Often the terminology used by those we debate against itself reveals what is hidden underneath, like when our opponents use the term “is consistent with” or demand we answer “are you saying the 2LoT has been violated?”

    What is being hidden is that they are preventing a reasonable discussion of the plausibility, as opposed to the bare possibility, that certain arrangements of matter can in good faith be expected to spontaneously occur, given any amount of time and given the observed probabilistic/entropic habits of matter/energy/information.

    Saying that X arrangement “could have” occurred and still be “consistent with” 2LoT (or some other probabilistic principle) is essentially avoiding the argument. Saying that “highly improbable things happen all the time” is avoiding the argument.

    The studious avoiding of relevance and of the actually observed pattern that occurs, speaks tellingly.

    It is, in the abstract, logically and physically possible for arbitrarily large fluctuations to occur. But, of course.

    Such, by virtue of the applicable statistics, are also vanishingly unlikely. Just as much the case, but suspiciously absent from serious discussion, and mocked, nit-picked or dismissed when the likes of a Sewell says:

    . . . The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur.

    The discovery that life on Earth developed through evolutionary “steps,” coupled with the observation that mutations and natural selection — like other natural forces — can cause (minor) change, is widely accepted in the scientific world as proof that natural selection — alone among all natural forces — can create order out of disorder, and even design human brains, with human consciousness. Only the layman seems to see the problem with this logic. In a recent Mathematical Intelligencer article [“A Mathematician’s View of Evolution,” The Mathematical Intelligencer 22, number 4, 5-7, 2000] I asserted that the idea that the four fundamental forces of physics alone could rearrange the fundamental particles of Nature into spaceships, nuclear power plants, and computers, connected to laser printers, CRTs, keyboards and the Internet, appears to violate the second law of thermodynamics in a spectacular way.1 . . . .

    What happens in a[n isolated] system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. As I wrote in “Can ANYTHING Happen in an Open System?”, “order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door…. If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth’s atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here.” Evolution is a movie running backward, that is what makes it special.

    THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn’t, that atoms would rearrange themselves into spaceships and computers and TV sets . . . [NB: Emphases added. I have also substituted in isolated system terminology as GS uses a different terminology.]

    That evolutionary materialism seems to be wedded to such statistical miracles, and that it is quite willing to exploit institutional clout to silence “but the emperor is naked” objections, are not healthy signs.

    It is time to stand.

    KF

  6. kairosfocus says:

    You are under arrest for bringing the Emperor into disrepute?

    — cf added pic

  7. Seversky says:

    The quote from Sewell reads like an extended statement of Hoyle’s Fallacy which, as we know, is a specific case of the strawman fallacy.

    I challenge you to find anyone – Darwinist, evomat, a/mat, whatever – who has argued that complex machines can spring into existence fully-formed ex nihilo. The only person who even suggested that a complete Boeing 747 could be whisked up by a tornado from parts in a junkyard was Hoyle. Did he really think he was the only one to have spotted this problem?

    If you want a very simple a/mat explanation of how complex machines could be created by natural processes, one possibility is you start with basic chemicals which assemble more complex molecules, which form self-replicating structures, which over vast spans of time give rise to creatures such as ourselves who are able to design and build Boeing 747s amongst other things. You know this hypothesis as well as we do and we are both aware of the difficulties in finding evidence that it did or even could happen.

    The problem is that your alternative of Intelligent Design offers no better explanatory purchase.

    First, it answers a different question to the one you are asking of science. It is a suggestion of who might have done it, not how it was done, which is what you are demanding science tells us.

    Second, let us accept, for the sake of argument, that the emergence of life is so improbable that the only reasonable explanation is the intervention of an intelligent agent. To create life – and possibly a universe in which it can survive – such an agent must contain the necessary amount of knowledge or information which, from our perspective, would make it nearly if not actually omniscient.

    Such a being must, therefore, itself be hugely complex and, hence, hugely improbable. Which prompts the perfectly reasonable question of whence came the Designer? The only rational explanation for such a hugely improbable entity is another Designer – and so on.

    The only way to halt an infinite regress is to posit an uncaused first cause (UFC). The problem is that a UFC smacks too much of being just a ploy to get you out of an unpalatable alternative. Declare a UFC by fiat not because there is any good reason to think such a thing exists. Besides, a UFC must itself be infinite to escape having a beginning so it doesn’t really get you out of the quandary of having to choose between two equally unsatisfactory alternatives.

  8. kairosfocus says:

    Sev,

    nope . . . labelling pejoratively [and dismissively smearing a Nobel-equivalent prize-holding expert in thermodynamics into the bargain] does not work.

    First, something as simple as a fishing reel or a D’Arsonval galvanometer is well beyond the threshold of 500 – 1,000 bits description length for config, and arguably a nut-and-bolt mechanism is too.

    Where, the threshold of sufficient complexity that fluctuations do not explain is well, well short of what is needed for:

    a: encapsulated [thus shielded from cross reactions],

    b: intelligently gated [to pass in valid nutrients and out wastes]

    c: metabolic [to couple energy and mass flows with prescriptive info, joined to energy converters and constructors to configure requisite components and harvest required energy]

    d: von Neumann code-using kinematic self replicator using [to self replicate]

    e: C-Chemistry aqueous medium [to have relevant underlying chemistry]

    f: protein using [cellular workhorse family of molecules]

    g: d/rna using [for information stores and more]

    h: relevantly homochiral [for geometric handedness requisite for key-lock fitting etc]

    . . . cell based life.

    Consequently a whole industry has sprung up to propose a step by step bridge to precursors in warm ponds or whatever other environment du jour is currently favoured.

    Basic problem?

    No viable answer that is backed up by empirically demonstrated adequacy, not involving huge investigator manipulation, has been found. Nor is any such in prospect.

    So claims and just so stories about replicators, RNA worlds and the like fall to the ground.

    Never mind, your suggestions like:

    If you want a very simple a/mat explanation of how complex machines could be created by natural processes, one possibility is you start with basic chemicals which assemble more complex molecules, which form self-replicating structures, which over vast spans of time give rise to creatures such as ourselves who are able to design and build Boeing 747s amongst other things. You know this hypothesis as well as we do and we are both aware of the difficulties in finding evidence that it did or even could happen.

    Until you can provide solid answers, including RELEVANT compensating flows, relying on statistical miracles and just-so stories backed by ideological imposition of materialism is not good enough. Those original first-step replicators, those intermediate steps, those first living cells or reasonable models need to be shown on empirical observation, not ideological speculation and gross extrapolations. That is the root-node challenge in the UD pro-Darwinism essay challenge that has been on the table for, what, two and a half years, unanswered. As well you know, from trying to push it aside and getting back to knocking at intelligently designed configuration.

    Where, that — aka design — is in fact the only empirically warranted adequate cause for FSCO/I to the point where we are fully warranted to infer per best current explanation from FSCO/I to design as credible cause. Never mind your dismissive one liner.

    But, let’s spot you the bigger “half” of the problem, getting to a viable cell.

    Onward, much the same problem still obtains as FSCO/I is needed in copious quantities; needed in many distinct sub systems to get viable body plans for complex life forms. Maybe 4 dozen new cell types plus embryological assembly programs or the equivalent to actually build body plans. Genomes some 10 – 100+ mn bases each, on reasonable estimates and observations.

    Where, because configs have that just-so specificity indicated by the wiring diagram pattern (up to some tolerance), FSCO/I comes in islands deeply isolated in config spaces.

    The search resources to cross the intervening seas of non-function, just are not there.

    Body plan level macroevo runs into much the same difficulty.

    So, for OOL there is a threshold required for viable life that poses a conundrum. And empirical demonstration of causal adequacy is persistently missing. For body plans, the complexity threshold for viability poses much the same problem.

    So, all at once or step by step, Hoyle had a point.

    And the empirical grounding required for evolutionary materialist models of OOL or OO body plans [OOBP, for short] simply is not there.

    So, yet again, with due allowance for rhetorical flourishes on tornadoes and junkyards, Sir Fred had a point.

    KF

  9.
    kairosfocus says:

    Sev

    Let me take up:

    The problem is that your alternative of Intelligent Design offers no better explanatory purchase.

    First, it answers a different question to the one you are asking of science. It is a suggestion of who might have done it, not how it was done, which is what you are demanding science tells us.

    The first of these is a bare ideological dismissal.

    FSCO/I is real, is relevant to life and is ROUTINELY and ONLY seen to have just one adequate cause, design. Intelligently directed configuration.

    Design would naturally implicate appropriate energy and mass flows, prescriptive information, coupling to energy converters and constructors, as well as exhaust of degraded energy and wastes.

    Where, we already see precursor technologies from Venter et al, and manipulation down to the level of individual atoms, as in the tunnelling microscope shots in which IBM was spelled out with atoms, etc.; such work shows there is no roadblock to precise molecular scale manipulation.

    Thermodynamics is satisfied, information, mass and energy flow requisites are satisfied, coupling and relevant compensation are satisfied.

    All we need to do it ourselves is several generations of progress.

    And yes, I openly expect relevant nanotech to be in place across this century, towards solar system colonisation.

    Next, where do you come by this notion of a who might have done it?

    Have you so disregarded open statements, repeatedly on record since Thaxton et al, that inference to design as process relevant to cell based life does not involve being able to infer that a relevant designer is within or beyond the cosmos? Where, the phenomenon in hand, is cell based life on earth? Where, we are observing it?

    I have repeatedly stated that on the phenomenon of earth based cellular life, a molecular nanotech lab some generations beyond Venter would do. Indeed, I strongly believe, across this century, will do. Though, I think nanotech replicators to do industrial revo 3.0 are more what will be in mind: nanotech fabricators with self replication.

    Have you fallen into believing your own propaganda?

    Other than that, I find it incredible to see declarations like that.

    If you want to see where I do think evidence points beyond the cosmos, shift gears to cosmological fine tuning joined to and interacting with ontological, cosmological and moral government considerations. Where, most of that goes into another intellectual realm completely, straight philosophy. And where such scientific aspects as obtain stand distinct from origin of cell based life and its diversification.

    If I did not have strong reason to think FSCO/I as a SCIENTIFIC matter is a strong sign of design, I would still be a theist. And holding that FSCO/I in life based on cells points to design is not a basis for that theism.

    Period.

    If, that is what is clouding your mind.

    How twerdun is not a problem for design.

    We know design is adequate to FSCO/I and we know technologies are in hand that point to doing it at molecular nanotech level. Indeed IIRC people have shown D/RNA manipulation to store info. Venter IIRC.

    Vera causa — demonstrated causal adequacy, is passed for intelligently directed configuration.

    But, if you are looking to blind chance and/or mechanical necessity creating FSCO/I, that is an utterly different matter.

    It has never been shown to be causally adequate, and BIG claims are on the table.

    In addition, such claims are inherently mechanistic and so need to demonstrate causal adequacy.

    All I am seeing above is a back-handed way of admitting that vera causa has not been passed, multiplied by an attempt to suggest that intelligently directed configuration cannot claim vera causa.

    Thanks for the admission, if back-handed is all you can give, we will take it.

    And, design routinely causes FSCO/I with evidence in hand that it can do so with relevant molecular nanotechs.

    Right now, I’d say the smart money is on design.

    KF

  10.
    kairosfocus says:

    Sev:

    Now for:

    >> let us accept, for the sake of argument, that the emergence of life is so improbable>>

    1 –> vastly improbable by blind chance and mechanical necessity. To which, you have no valid counter.

    >> that the only reasonable explanation is the intervention of an intelligent agent.>>

    2 –> The only empirically observed, analytically plausible source of FSCO/I is design.

    3 –> Where the statistics underpinning 2LOT is pivotal to seeing why such is so.

    >>To create life>>

    4 –> A distinct task, separate from origin of a cosmos fitted for life

    >> – and possibly a universe in which it can survive –>>

    5 –> This is prior and the evidence of fine tuning that sets up a habitat for cell based life is decisive.

    >> such an agent must contain the necessary amount of knowledge or information>>

    6 –> You conflate knowledge, a function of rationally contemplative mind, with data storage and information processing, which can be blindly mechanical and would be GIGO-limited.

    7 –> Knowledge, is well warranted, credibly true and/or reliable belief, so it requires rationally contemplative subjects.

    8 –> To try to get to such from data and signals processed by some blindly mechanical computational substrate is to try to get North by insistently heading West.

    9 –> Absurdly at cross purposes, as say Reppert highlighted:

    . . . let us suppose that brain state A, which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [[But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [[so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.

    >> which, from our perspective, would make it nearly if not actually omniscient.>>

    10 –> A cosmological designer would be highly knowledgeable and massively powerful, which would include the possibility of omniscience and omnipotence. Join such with ontological, cosmological and moral reasoning and you are getting somewhere: an inherently good creator God, a necessary [thus eternal] and maximally great being, who is worthy of being served by doing the good.

    11 –> But the designer of cell based life does not require any such constraint. Within 100 years, I am confident we will be able to do it.

    >>Such a being must, therefore, itself be hugely complex>>

    12 –> Nope, we have no basis for inferring that mind is composite and complicated, as opposed to computational substrates.

    >>and, hence, hugely improbable.>>

    13 –> attempted reductio, fails because of strawman creation.

    >> Which prompts the perfectly reasonable question of whence came the Designer? The only rational explanation for such a hugely improbable entity is another Designer – and so on.

    The only way to halt an infinite regress is to posit an uncaused first cause (UFC).>>

    14 –> Further strawmen. By projecting implicit substitutions of computational substrates etc and conflating information processing with reasoning, you end with a sophomoric caricature of theism that has been recently popularised by Dawkins.

    15 –> Instead, the issue is modes of being and causal adequacy (with the reality of our being under moral government involved). As I pointed out in the OP for a recent thread that you did not try to raise such an argument on:

    Let me do a basic outline of key points:

    1: A world, patently exists.

    2: Nothing, denotes just that, non-being.

    3: A genuine nothing, can have no causal capacity.

    4: If ever there were an utter nothing, that is exactly what would forever obtain.

    5: But, per 1, we and a world exist, so there was always something.

    6: This raises the issue of modes of being, first possible vs impossible.

    7: A possible being would exist if a relevant state of affairs were realised, e.g. heat + fuel + oxidiser + chain rxn –> fire (a causal process, showing fire to depend on external enabling factors)

    8: An impossible being such as a square circle has contradictory core characteristics and cannot be in any possible world. (Worlds being patently possible as one is actual.)

    9: Of possible beings, we see contingent ones, e.g. fires. This also highlights that if something begins, there are circumstances under which it may not be, and so, it is contingent and is caused as the fire illustrates.

    10: Our observed cosmos had a beginning and is caused. This implies a deeper root of being, as necessarily, something always was.

    11: Another possible mode of being is a necessary being. To see such, consider a candidate being that has no dependence on external, on/off enabling factors.

    12: Such (if actual) has no beginning and cannot end; it is either impossible or actual and would exist in any possible world. For instance, a square circle is impossible: one and the same object cannot be circular and square in the same sense and place at the same time . . . but there is no possible world in which twoness does not exist.

    13: To see such, begin with the set that collects nothing and proceed:

    { } –> 0

    {0} –> 1

    {0, 1} –> 2

    Etc.

    14: We thus see on analysis of being, that we have possible vs impossible and of possible beings, contingent vs necessary.

    15: Also, that of serious candidate necessary beings, they will either be impossible or actual in any possible world. That’s the only way they can be, they have to be in the [world-]substructure in some way so that once a world can exist they are there necessarily.

    16: Something like a flying spaghetti monster or the like, is contingent [here, not least as composed of parts and materials], and is not a serious candidate. (Cf also the discussions in the linked thread for other parodies and why they fail.)

    17: By contrast, God is a serious candidate necessary being, The Eternal Root of being. Where, a necessary being root of reality is the best class of candidates to always have been.

    18: The choice, as discussed in the already linked, is between God as impossible or as actual. Where, there is no good reason to see God as impossible, or not a serious candidate to be a necessary being, or to be contingent, etc.

    19: So, to deny God is to imply and to need to shoulder the burden of showing God impossible. [U/D April 4, 2015: We can for illustrative instance cf. a form of Godel’s argument, demonstrated to be valid: . . . ]

    20: Moreover, we find ourselves under moral government, to be under OUGHT.

    21: This, post the valid part of Hume’s guillotine argument (on pain of the absurdity of ultimate amorality and might/manipulation makes ‘right’) implies that there is a world foundational IS that properly bears the weight of OUGHT.

    22: Across many centuries of debates, there is only one serious candidate: the inherently good, eternal creator God, a necessary and maximally great being worthy of loyalty, respect, service through doing the good and even worship.

    23: Where in this course of argument, no recourse has been had to specifically religious experiences or testimony of same, or to religious traditions; we here have what has been called the God of the philosophers, with more than adequate reason to accept his reality such that it is not delusional or immature to be a theist or to adhere to ethical theism.

    24: Where, ironically, we here see exposed, precisely the emotional appeal and hostility of too many who reject and dismiss the reality of God (and of our being under moral government) without adequate reason.

    So, it would seem the shoe is rather on the other foot.

    16 –> So, theism is not at all like the caricature you would set up and knock over.

    >>The problem is that a UFC smacks too much of being just a ploy to get you out of an unpalatable alternative.>>

    17 –> Loaded language to set up and knock over yet another strawman.

    18 –> The issue is, why is there something — a world, with us as rational and morally governed beings — in it; rather than nothing. That brings up ground of being, and all sorts of issues, which are admittedly difficult and indeed abstruse, but — as Mathematics and Physics show — that is not the same as absurd.

    19 –> On considering such, God as a necessary being ground of being is a reasonable alternative, and one that will by the logic of such modes of being, will be either impossible or actual.

    20 –> Strawmannish distortions driven by failing to understand on its own terms, or rhetorical parodies rooted in such misunderstandings don’t work to shift that off the table.

    Of course, this is a side note for something important in its own right but not particularly relevant to the focus of this thread’s topic. I suggest going to the other thread if one wishes to debate this.

    KF

  11.
    scordova says:

    Thank you KF for highlighting my comment.

    long term maverick ID (and, I think, still YEC-sympathetic) supporter SalC

    I was a YEC-sympathetic OEC for years, but as of 2013 I’m now a professing YEC; that, however, is a personal view, not a scientific claim.

    I’m viewed as a maverick IDist because:

    1. I don’t think ID is science, even though I believe it is true. Rather than absolute statements of truth or arguments over whether “ID is science”, the question of ID is better framed in terms like Pascal’s Wager.

    2. I don’t agree with 2LOT being used in defense of ID

    3. I argue that information theory in defense of ID is not as good as arguing basic probability

    4. The age of the fossils (really, the time of death of the fossils) is not strictly speaking a YEC issue; it is not an age-of-the-universe issue, it is a time-of-death issue. Empirical evidence strongly argues the fossil record is recent, not old. Short time frames favor ID over evolution, and hence evolutionary theory, as stated, is likely wrong. The time of death of the fossils is recent and independent of the radiometric ages of the rocks they are buried in. A living dog today could be buried in 65-million-year-old rocks; exhuming it would not imply the dog died 65 million years ago. Follow the evidence where it leads, and the evidence says the fossils died more recently than evolutionists claim. C14 traces are ubiquitous in Carboniferous strata, and the best explanation is recency in the time of death of the fossils.

    But back to the OP, should ID be defended by:

    1. Law of Large Numbers (LLN)
    2. Information theory
    3. 2nd Law of Thermodynamics (2LOT)

    I’ll argue by way of illustration. If we found 500 fair coins 100% heads, should an IDist argue against the chance hypothesis using:

    1. Law of Large numbers (the best approach, imho). Here is the law of large numbers:

    sample average converges in probability towards the expected value

    The expected value of a system of 500 fair coins in an uncertain configuration (like after flipping and/or shaking) is 50% heads. 100% heads is farthest from expectation, therefore we would rightly reject the chance hypothesis as an explanation. Simple, succinct, unassailable. (A quick simulation sketch of this point follows after item 3 below.)

    2. Information Theory (like using a sledgehammer to swat flies, it will work if the sledgehammer is skillfully used — yikes!). Does all the fancy math and voluminous dissertations add force to the argument? I’ll let the information theory proponents offer their information arguments to reject the chance hypothesis. I won’t even try…

    3. 2nd Law of Thermodynamics (2LOT)

    Let me state the most widely accepted version of 2LOT as would be found in textbooks of physics, chemistry and engineering students:

    CLAUSIUS STATEMENT

    Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.

    I invite 2LOT proponents to use this definition of 2LOT to argue against the chance hypothesis for 500 fair coins 100% heads.
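
    Going back to item 1: for concreteness, here is a minimal Python sketch of the LLN point (the trial count is an arbitrary illustrative choice). Shake 500 fair coins repeatedly, and the heads fraction stays pinned near the expected 0.5, while the all-heads outcome, at probability 2^-500, simply never turns up.

```python
# Minimal sketch of the LLN point for 500 fair coins: the sample fraction
# of heads stays close to the expected 0.5, while an all-heads outcome
# has probability 2**-500 and simply never shows up.
import random

N_COINS = 500
N_TRIALS = 10_000          # arbitrary illustrative number of "shakes"

worst_deviation = 0.0
for _ in range(N_TRIALS):
    heads = sum(random.getrandbits(1) for _ in range(N_COINS))
    worst_deviation = max(worst_deviation, abs(heads / N_COINS - 0.5))

print(f"largest deviation from 50% heads in {N_TRIALS} shakes: {worst_deviation:.3f}")
print(f"probability of 500/500 heads on a single shake: {2.0 ** -500:.2e}")
```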

    If one wouldn’t use Information Theory or 2LOT for such a trivial “design” as 500 fair coins 100% heads, why then should such arguments be appropriate for substantially more complex designs? Behe’s Edge of Evolution was a basic probability argument. It works. It gets the point across.

    LLN seems to me a superior approach. The Humpty Dumpty argument is a subtle instance of the LLN argument; it is the way I would defend the idea “life does not proceed from non-life” (to extrapolate Pasteur’s “Life comes from life” statement).

    The heart of the issue is which is the better approach for IDists:

    1. LLN
    2. Information Theory
    3. 2LOT

    I have argued LLN, imho, is the best on many levels.

  12.
    scordova says:

    Algorithmically controlled metabolisms (such as realized in life) are low multiplicity constructs as a matter of principle

    Let me try to explain in more lay terms. Syntactically and semantically coherent statements in English occupy an infinitesimally small portion of the space of possible alphabetic sequences. There are multitudinously more ways to write syntactically and semantically nonsensical statements than coherent statements, hence we could say:

    1. non-sense statements have high multiplicity
    2. coherent statements have low multiplicity

    According to LLN, random processes (like monkeys on a typewriter, figuratively speaking) will result in nonsense statements because nonsense statements have high multiplicity, and random processes tend toward high multiplicity targets. Random processes will not result in coherent statements.
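
    To put toy numbers on that multiplicity gap, here is a small sketch; the five-word list is an assumed, purely illustrative stand-in for the “coherent” strings among the 26^5 possible five-letter sequences.

```python
# Toy multiplicity comparison: a handful of "coherent" 5-letter strings
# versus the 26**5 = 11,881,376 possible 5-letter sequences.  Random
# typing essentially never lands on the coherent set.
import random
import string

COHERENT = {"water", "coins", "house", "grain", "light"}   # assumed examples
LENGTH = 5
TRIALS = 1_000_000

hits = 0
for _ in range(TRIALS):
    s = "".join(random.choice(string.ascii_lowercase) for _ in range(LENGTH))
    if s in COHERENT:
        hits += 1

print(f"coherent fraction of the space: {len(COHERENT) / 26**LENGTH:.2e}")
print(f"hits in {TRIALS:,} random strings: {hits}")
```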

    A biological organism’s “software” operates on a syntax and semantics that it defines for itself. Hence, some call life an example of an “algorithmically controlled metabolism”. In fact life implements an instance of the most complex linguistic construct in the universe, a highly complex Quine computer:
    http://en.wikipedia.org/wiki/Quine_(computing)

    Self-defined, complex language processing systems and complex quine computers occupy an infinitesimally low multiplicity configuration relative to the space of possible molecular configurations as a matter of principle. LLN says random processes will not then create the first life in this universe.

    Rather than appealing to LLN and the problems of multiplicity in the Origin of Life, Wells stated the problem in less technical terms in this way:

    Even if [Stanley] Miller’s experiment were valid, you’re still light years away from making life. It comes down to this. No matter how many molecules you can produce with early Earth conditions, plausible conditions, you’re still nowhere near producing a living cell, and here’s how I know. If I take a sterile test tube, and I put in it a little bit of fluid with just the right salts, just the right balance of acidity and alkalinity, just the right temperature, the perfect solution for a living cell, and I put in one living cell, this cell is alive – it has everything it needs for life. Now I take a sterile needle, and I poke that cell, and all its stuff leaks out into this test tube. We have in this nice little test tube all the molecules you need for a living cell – not just the pieces of the molecules, but the molecules themselves. And you cannot make a living cell out of them. You can’t put Humpty Dumpty back together again. So what makes you think that a few amino acids dissolved in the ocean are going to give you a living cell? It’s totally unrealistic.

  13.
    Mung says:

    kairosfocus:

    [–> this is of course equivalent to the string of yes/no questions required to specify the relevant “wiring diagram” for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002 . . . ]

    Instructive as always kf. And it doesn’t take a genius to make the connection between information theory and thermodynamics. One doesn’t even have to eat crow to see the connection. In fact, the notion has a long and distinguished history.

    I guess what puzzles me most is that Salvador, with all his multiple degrees, can’t make the connection.

  14.
    Mung says:

    Salvador:

    sample average converges in probability towards the expected value

    I guess now we need an explanation of sampling theory and of expected value and an explanation of why sample average converges in probability towards the expected value. I am sure you have one that’s simple.

    Wouldn’t introducing the concept of a probability distribution be simpler?

  15.
    Mung says:

    A fundamental approach to the theory of the properties of macroscopic matter has three aspects. First, there is the detailed characterization of the atomic states and structure in terms of the formalism of quantum mechanics. Second, there is the application of statistical considerations to these states; this is the subject matter of statistical mechanics. And, third, there is the development of the macroscopic consequences of the statistical theory, constituting the subject matter of thermodynamics.

    – Herbert B. Callen. Thermodynamics

  16.
    Mung says:

    Salvador:

    Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.

    Salvador:

    I invite 2LOT proponents to use this definition of 2LOT to argue against the chance hypothesis for 500 fair coins 100% heads.

    As has been pointed out to you repeatedly, the Clausius formulation is not the only formulation of the 2LOT.

    And again, as has been pointed out to you repeatedly, you have things exactly backwards.

    It’s all about probabilities, and the probabilities do not prevent the improbable from taking place. As improbable as it may seem, a tornado could indeed pass through a junkyard and leave behind a fully functioning 747.

    Hell might freeze over first, but “the laws of thermodynamics” do not prevent it.

  17.
    scordova says:

    Here are more ways the 2nd law is expressed.

    http://web.mit.edu/16.unified/.....ode37.html

    Any 2LOT IDist is invited to show how 500 fair coins 100% heads is not a product of chance based on any well-accepted definition of 2LOT.

    I showed how LLN can be used.

    100% heads is maximally far from the expectation of 50% heads, therefore we can reject chance as an explanation since 100% heads is inconsistent with LLN.

    Any 2LOT proponents care to state it in comparably succinct ways from well-accepted definitions of 2LOT?

    I don’t think definitions of 2LOT involving the term “disorder” should count in light of this material from a website at Occidental College by a respected educator in thermodynamics:

    http://entropysite.oxy.edu/

    2.”Disorder — A Cracked Crutch for Supporting Entropy Discussions” from the Journal of Chemical Education, Vol. 79, pp. 187-192, February 2002.

    “Entropy is disorder” is an archaic, misleading definition of entropy dating from the late 19th century before knowledge of molecular behavior, of quantum mechanics and molecular energy levels, or of the Third Law of thermodynamics. It seriously misleads beginning students, partly because “disorder” is a common word, partly because it has no scientific meaning in terms of energy or energy dispersal. Ten examples conclusively demonstrate the inadequacy of “disorder” in general chemistry.

    and

    April 2014

    The 36 Science Textbooks That Have Deleted “disorder” From Their Description of the Nature of Entropy

    (As advocated in the publications of Dr. Frank L. Lambert, Professor Emeritus, Chemistry, Occidental College.)

  18.
    Mung says:

    Salvador:

    Any 2LOT IDist is invited to show how 500 fair coins 100% heads is not a product of chance based on any well-accepted definition of 2LOT.

    Chance is not a cause, Salvador, and the 2LOT does not cause chance.

    Anyone but me see the irony in a Young Earth Creationist appealing to the law of large numbers?

  19.
    scordova says:

    Mung minces words:

    Chance is not a cause, Salvador,

    If you came across a table on which was set 500 fair coins and 100% displayed the “heads” side of the coin, how would you, using 2LOT, test “chance” as a hypothesis to explain this particular configuration of coins?

  20.
    niwrad says:

    scordova

    Any 2LOT IDist is invited to show how 500 fair coins 100% heads is not a product of chance based on any well-accepted definition of 2LOT.

    For 2nd_law_SM, systems spontaneously go to probable states.
    The 500-heads output of a 500-coin flipping system is among the most improbable of all its 2^500 states, so such an output cannot be expected as a spontaneous result.
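
    Put in statistical-weight terms (a short illustrative check, treating each exact heads/tails sequence as a microstate and the number of heads as the macrostate):

```python
# Statistical weights of the "number of heads" macrostates for 500 coins:
# the all-heads macrostate contains exactly one microstate, while the
# 250-heads macrostate contains about 5e149 of the 2**500 microstates.
from math import comb

N = 500
w_all_heads = comb(N, N)          # 1 microstate
w_half = comb(N, N // 2)          # the dominant 50-50 cluster

print(f"W(all heads) = {w_all_heads}")
print(f"W(250 heads) = {float(w_half):.2e}")
print(f"P(all heads) = {1 / 2**N:.2e}")     # one microstate out of 2**500
```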

  21.
    scordova says:

    For 2nd_law_SM, systems spontaneously go to probable states.

    Where can the readers find this 2nd_law_SM formally stated in physics, chemistry, and engineering texts? If it’s your own construction, that’s fine, but it would be inappropriate to represent it as an accepted version of the 2nd law.

    Btw, if we are talking statistical mechanics and thermodynamics we are talking states that are thermodynamically related, not heads/tails related. Thermodynamic macrostates are defined by Temperature, Pressure, Volume, Internal Energy, Quantity of Particles, etc. not heads/tails of coin configurations. That’s why it’s called the 2nd law of thermodynamics, not Matzke’s law of 500 fair coins.

  22.
    niwrad says:

    scordova

    Here is where Sewell (and all other 2nd_law_SM IDers) disagree with you: the conception of statistical mechanics (SM).

    SM, as its name says, takes statistics (and of course probability theory) on board as a tool. So in a sense SM also encompasses your LLN argument, but not vice versa.

    You say: we are talking statistical mechanics and thermodynamics we are talking states that are thermodynamically related, not heads/tails related. Thermodynamic macrostates are defined by Temperature, Pressure, Volume, Internal Energy, Quantity of Particles, etc. not heads/tails of coin configurations. That’s why it’s called the 2nd law of thermodynamics, not Matzke’s law of 500 fair coins.

    Here again we disagree. SM can deal with all large systems that have statistical behaviour. E.g., when SM is applied to quantum mechanics or information theory, heat, temperature, pressure, volume… are not the main issue. There one works with the Boltzmann constant set equal to 1 (a pure number, without physical units). So in a sense SM has a broader application than classical heat thermodynamics alone. This distinction is the reason I always use the term “2nd_law_SM”, not simply the generic “2nd law”.

    I know you don’t want to acknowledge that, so you limit yourself to classical heat thermodynamics, but then you are necessarily forced to invent your LLN argument, when it could well be incorporated into our more general 2nd_law_SM ID argument. It is a bad situation that helps evolutionists only. Please reconsider your position and jump into the Sewell camp. You would be welcome.

  23.
    kairosfocus says:

    SalC & Mung:

    An interesting exchange; three mavericks together I’d say.

    I would note again, that for 100+ years, 2LOT has been inextricably interconnected with the statistical view at ultramicroscopic scale. It is in this context that probabilistic, informational and thermodynamics facets have been interconnected. To the point that to try to pull one aspect out in exclusion to the others is a hopeless exercise.

    What I agree with is that this stuff gets technical really fast so that only those with an adequate background should engage such in contexts where technicalities are likely to come out. You need to know enough statistics and background math, probability, classical and statistical thermodynamics and information theory to follow what is going on.

    That is a tall order and is basically calling for someone with an applied physics-engineering background with a focus on electronics and telecommunications. If you do not know what a Fermi level is, or cannot address noise factor/figure or noise temperature, or the informational entropy of a source, or why info gets a negative log probability metric, you probably don’t have enough. Likewise, if you do not know what S = k*log W [or better yet, upper case Omega] means and how it comes to be that way, or the difference between a microstate and a macrostate, you don’t have enough. A good test is whether you can follow the arguments in chs 7 – 9 of Thaxton et al in TMLO.
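
    On the negative log probability point specifically, a minimal sketch shows the key property: with I = -log2(p), the information of independent events adds.

```python
# Why information gets a negative-log-probability metric: independent
# events multiply probabilities, and -log2 turns that product into a sum.
from math import log2, isclose

def info_bits(p):
    """Surprisal, in bits, of an event with probability p."""
    return -log2(p)

one_toss = info_bits(0.5)             # 1.0 bit for a fair coin
ten_tosses = info_bits(0.5 ** 10)     # ten independent fair tosses
assert isclose(ten_tosses, 10 * one_toss)

print(one_toss, ten_tosses)           # 1.0 10.0
```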

    All of this is why I normally keep such matters as background. (It is only because they were put on the table again, that I have taken them up.)

    But that does not mean they should not be done.

    They should.

    You will also notice that I never of my own accord talk in terms of “2LOT forbids X” in isolation. I will speak in terms of the statistical mechanical underpinnings that ground it; notice for instance my discussion in App 1 of my briefing note, where I go straight to a model that is statistical, though discussed qualitatively:

    http://www.angelfire.com/pro/k.....tm#thermod

    And I will imply or outright address fluctuations and point out that beyond a certain modest scale, large fluctuations from thermodynamic equilibrium resting on relative statistical weight of clusters of microstates, will be so overwhelmed statistically that they are maximally unlikely to be observed on the gamut of sol system or observed cosmos.

    Even the now habitual insistence on observed cosmos is a technically backed point; scientifically and philosophically. Likewise things like speaking of biological or cell based life, etc.

    Now, taking 500 coins on a sol system scale and tossing at random, it is easy to see that the configuration space holds 2^500 possibilities. If each of the 10^57 atoms in the sol system were made into an observer and given an array of 500 coins, flipped, examined and recorded every 10^-13 or 10^-14 s (fast atomic chem interaction rates) for 10^17 s [big bang linked], we would be looking at sampling, say, 10^[13 + 17 + 57] = 10^87 possibilities, with replacement. Taking that as a straw, we could represent the 3.27*10^150 possibilities for 500 coins as a cubical haystack comparably thick to our galaxy, several hundred LY.

    Blind needle in haystack search at that level is negligibly different from no search of consequence and we would have no right to expect to pick up any reasonably isolated zones in the config space. Too much stack, too few needles, far too little search.
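
    The round figures above can be tallied in a few lines (a back-of-envelope check using the numbers as quoted):

```python
# Back-of-envelope check of the needle-in-haystack figures quoted above.
from math import log10

space = 2 ** 500                      # ~3.27e150 configs for 500 coins
observers = 10 ** 57                  # atoms in the solar system (round figure)
rate = 10 ** 13                       # observations per second, per observer
duration = 10 ** 17                   # seconds, roughly the age of the cosmos

samples = observers * rate * duration     # 10**87 samples, with replacement
print(f"config space   ~ 10^{log10(space):.1f}")
print(f"total samples  ~ 10^{log10(samples):.0f}")
print(f"fraction seen  ~ 1 in 10^{log10(space / samples):.1f}")
```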

    Where, as the Wicken wiring diagram approach to understanding FSCO/I shows, functional configs will be very tightly constrained and so will come in isolated zones, islands of function. Shake up a bag of Penn International 50 reel parts as long as you please; you will not build a functional reel.

    500 H or similar configs fall in that context: specific, simply describable or observable as such, tightly constrained as to accept/reject. Maximally unlikely to be observed by blind needle in stack search, in a context where the overwhelming bulk of possibilities will be in the cluster near 50-50, in no particular order.

    Likewise, hitting on a string that gives 72 characters of contextually relevant English text in ASCII code is maximally unlikely by such blind search. It matters not whether pure chance or chance plus necessity [biased chance, or chance with directional drift . . . ] is at work, so long as it is genuinely blind.

    Where, all of this will be instantly familiar, and uses very familiar themes that are culturally broadly accessible.

    Save, that the idea of a configuration space [a cut down phase space with momentum left off, i.e. a state space] will need explanation or illustration. Likewise the principle that a structured set of Y/N q’s — a description language — can specify the wiring diagram for an entity. But, reference to what AutoCAD etc do will instantly provide context.

    The first direct implication is that once such is understood, the OOL challenge moves to focus.

    The root of the tree of life.

    Start with physics and chemistry in a pond or the like and get to viable architectures for life in plausible and empirically, observationally warranted steps. That is, as OOL is temporally inaccessible, show causal adequacy of proposed blind watchmaker mechanisms in the here and now; as a logical and epistemologically controlled restraint on ideological speculation.

    Silence.

    Next, address the issue of OO body plans, let’s begin to just use OOBP. This is the relevant macroevolutionary context.

    The trend is to want to get away with extrapolations from micro changes and/or to smuggle in the notion that it’s all within a continent of incrementally accessible function, with smoothly accessible fitness peaks.

    So, the matter pivots on pointing out the reality of islands of function, with search challenges even more constrained as we now deal with Earth’s biosphere. Where, again, FSCO/I comes in islands deeply isolated in config spaces. Proteins in AA sequence space being a good study example.

    All of this has been done for years.

    Indeed, here is Meyer in his 2004 article in PBSW, which was drowned out by pushing an artificial sea of controversy and career busting:

    One way to estimate the amount of new CSI that appeared with the Cambrian animals is to count the number of new cell types that emerged with them (Valentine 1995:91-93) . . . the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types . . . New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information. Thus an increase in the number of cell types implies (at a minimum) a considerable increase in the amount of specified genetic information. Molecular biologists have recently estimated that a minimally complex single-celled organism would require between 318 and 562 kilobase pairs of DNA to produce the proteins necessary to maintain life (Koonin 2000). More complex single cells might require upward of a million base pairs. Yet to build the proteins necessary to sustain a complex arthropod such as a trilobite would require orders of magnitude more coding instructions. The genome size of a modern arthropod, the fruitfly Drosophila melanogaster, is approximately 180 million base pairs (Gerhart & Kirschner 1997:121, Adams et al. 2000). Transitions from a single cell to colonies of cells to complex animals represent significant (and, in principle, measurable) increases in CSI . . . .

    In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types, but also for the origin of new body plans . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes–the very stuff of macroevolution–apparently do not vary. In other words, mutations of the kind that macroevolution doesn’t need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don’t occur.

    Do they have a good answer to this, one backed by vera causa?

    Nope.

    No more, than to this from Loennig of Max Planck Institute [a Jehovah’s Witness, BTW . . . ], in his 2004 equally peer reviewed presentation, on “Dynamic genomes, morphological stasis, and the origin of irreducible complexity”:

    . . . examples like the horseshoe crab [~250 MY fossil morphology stasis] are by no means rare exceptions from the rule of gradually evolving life forms . . . In fact, we are literally surrounded by ‘living fossils’ in the present world of organisms when applying the term more inclusively as “an existing species whose similarity to ancient ancestral species indicates that very few morphological changes have occurred over a long period of geological time” [85] . . . . Now, since all these “old features”, morphologically as well as molecularly, are still with us, the basic genetical questions should be addressed in the face of all the dynamic features of ever reshuffling and rearranging, shifting genomes, (a) why are these characters stable at all and (b) how is it possible to derive stable features from any given plant or animal species by mutations in their genomes? . . . .

    A first hint for answering the questions . . . is perhaps also provided by Charles Darwin himself when he suggested the following sufficiency test for his theory [16]: “If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.” . . . Biochemist Michael J. Behe [5] has refined Darwin’s statement by introducing and defining his concept of “irreducibly complex systems”, specifying: “By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning” . . . [for example] (1) the cilium, (2) the bacterial flagellum with filament, hook and motor embedded in the membranes and cell wall and (3) the biochemistry of blood clotting in humans . . . .

    One point is clear: granted that there are indeed many systems and/or correlated subsystems in biology, which have to be classified as irreducibly complex and that such systems are essentially involved in the formation of morphological characters of organisms, this would explain both, the regular abrupt appearance of new forms in the fossil record as well as their constancy over enormous periods of time. For, if “several well-matched, interacting parts that contribute to the basic function” are necessary for biochemical and/or anatomical systems to exist as functioning systems at all (because “the removal of any one of the parts causes the system to effectively cease functioning”) such systems have to (1) originate in a non-gradual manner and (2) must remain constant as long as they are reproduced and exist. And this could mean no less than the enormous time periods mentioned for all the living fossils hinted at above. Moreover, an additional phenomenon would also be explained: (3) the equally abrupt disappearance of so many life forms in earth history . . . The reason why irreducibly complex systems would also behave in accord with point (3) is also nearly self-evident: if environmental conditions deteriorate so much for certain life forms (defined and specified by systems and/or subsystems of irreducible complexity), so that their very existence be in question, they could only adapt by integrating further correspondingly specified and useful parts into their overall organization, which prima facie could be an improbable process — or perish . . . .

    According to Behe and several other authors [5-7, 21-23, 53-60, 68, 86] the only adequate hypothesis so far known for the origin of irreducibly complex systems is intelligent design (ID) . . . in connection with Dembski’s criterion of specified complexity . . . . “For something to exhibit specified complexity therefore means that it matches a conditionally independent pattern (i.e., specification) of low specificational complexity, but where the event corresponding to that pattern has a probability less than the universal probability bound and therefore high probabilistic complexity” [23]. For instance, regarding the origin of the bacterial flagellum, Dembski calculated a probability of 10^-234[22].

    Has such evidence of islands of irreducibly complex function been adequately answered?

    Nope, again.

    So, why is it we still see the sort of hot disputes that pop up in and around UD, given that the only empirically demonstrated adequate cause of FSCO/I is intelligently directed configuration? And, the needle in haystack blind search challenge readily explains why?

    That is, per inductive inference to best current explanation FSCO/I is a highly reliable sign of design.

    A clue, is that objectors then zero in and try to throw up reasons to dismiss or ignore FSCO/I as real, relevant and recognised. Never mind that the concept is readily demonstrated, is directly recognisable per the wiring diagram pattern, is discernible in writings of leading ID thinkers and demonstrably traces to Orgel and Wicken in the 1970’s. In fact, it started out as just a handy way to abbreviate a descriptive phrase.

    We are not dealing with dialogue constrained by mutual respect for one another, for first principles of reason and for objective assessment of evidence and warrant.

    We face a dirty ideological war, and a case where quite literally the stronger the argument, the more strident and ruthless the objections and obfuscatory talking points.

    So, we simply will have to lay out our case at more accessible and more technical levels, demonstrating adequate warrant.

    And stand.

    And hold our ground in the face of ruthless and too often uncivil behaviour.

    We are dealing with those who — quite correctly — view what we have stood for as a dangerous threat to their ideological schemes and linked sociocultural agendas. Agendas shaped by the sheer amorality and radical relativism of a priori evolutionary materialism.

    So, there is deep polarisation, there is rage, there is ruthlessness, there is even outright nihilism.

    But, we must stay the course, as if science is not rescued from such ideologisation, what is left of rationality in our civilisation will collapse. With fatal consequences.

    If you doubt me on this, simply observe how ever so many of the more determined objectors are perfectly willing to burn down the house of reason to preserve their ideology.

    A very bad sign indeed.

    It is kairos . . . time to stand and be counted.

    Only that will in the end count.

    And, coming back full circle, at popular level, the discussion pivots on configuration, FSCO/I and blind needle in haystack search. But, that needs to have technical backbone, which will pivot on the three interacting perspectives, facets of a whole.

    But those who speak to such things need to understand what they are dealing with.

    And, on thermodynamics specifically, the matter focusses first on the statistical underpinnings of 2LOT, and the need for relevant concurrent flows of energy, mass and info coupled to energy converters and constructors that create FSCO/I rich entities. Where, the informational perspective, that entropy is best seen as a metric of the average missing info to specify microstate given a macrostate defined at observable level, becomes pivotal. For, that is closely linked to clustering of microstates, relative statistical weight of clusters, spontaneous change tendencies, etc.
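
    For readers who want the missing-information reading made concrete, here is a minimal sketch using the 500-coin toy system again (the choice of that system, and the unit conversion shown, are illustrative): for a macrostate compatible with W equally likely microstates, the missing information is log2 W bits, and S = k_B ln W expresses the same count in thermodynamic units.

```python
# Entropy as average missing information: a macrostate compatible with W
# equally likely microstates leaves log2(W) bits unspecified about the
# exact microstate; S = k_B * ln(W) is the same count in J/K.
from math import comb, log, log2

K_B = 1.380649e-23                 # Boltzmann constant, J/K

N = 500                            # the 500-coin toy system again
W = comb(N, N // 2)                # microstates in the dominant 250-heads macrostate

print(f"missing info : {log2(W):.1f} bits")    # ~497 bits
print(f"S = k_B ln W : {K_B * log(W):.2e} J/K")
```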

    From this, we can see why FSCO/I takes on the significance it does.

    But, we must be realistic: the stronger the argument, the more determined, ruthless and utterly closed-minded will be the objections from committed ideologues. And those who look to their leadership will blindly parrot the talking points that have been drummed into them.

    One of these, patently, is the substitution of irrelevant flows in the open system compensation argument.

    Which, is why, again, I highlight Clausius as you cited:

    Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.

    KF

    PS: I again note that in L K Nash’s classic introduction to statistical thermodynamics, a system of 500 or 1,000 or so coins is used as the key introductory example.

  24.
    kairosfocus says:

    Niw, I agree. KF

  25.
    scordova says:

    Here is where Sewell (and all other 2nd_law_SM IDers) disagree with you: the conception of statistical mechanics (SM).

    SM, as its name says, takes statistics (and of course probability theory) on board as a tool. So in a sense SM also encompasses your LLN argument, but not vice versa.

    So where then will I find 2nd_law_SM in textbook or even journaled physics? I provided at least 5 acceptable versions of the 2nd law that can easily be found in university or professional literature; none of them says what your 2nd_law_SM says. What you have given is a vague description of statistical mechanics; it IS NOT the 2nd law of thermodynamics.

    For the reader’s benefit, thermodynamics is only one branch of statistical mechanics.

    Here is the wiki description of Statistical Mechanics. Let the reader note, there is no “law of statistical mechanics”:

    Statistical mechanics is a branch of theoretical physics and chemistry (and mathematical physics) that studies, using probability theory, the average behaviour of a mechanical system where the state of the system is uncertain
    ….
    common use of statistical mechanics is in explaining the thermodynamic behaviour of large systems. Microscopic mechanical laws do not contain concepts such as temperature, heat, or entropy, however, statistical mechanics shows how these concepts arise from the natural uncertainty that arises about the state of a system when that system is prepared in practice. The benefit of using statistical mechanics is that it provides exact methods to connect thermodynamic quantities (such as heat capacity) to microscopic behaviour, whereas in classical thermodynamics the only available option would be to just measure and tabulate such quantities for various materials. Statistical mechanics also makes it possible to extend the laws of thermodynamics to cases which are not considered in classical thermodynamics, for example microscopic systems and other mechanical systems with few degrees of freedom.[1] This branch of statistical mechanics which treats and extends classical thermodynamics is known as statistical thermodynamics or equilibrium statistical mechanics.

    Statistical mechanics also finds use outside equilibrium. An important subbranch known as non-equilibrium statistical mechanics deals with the issue of microscopically modelling the speed of irreversible processes that are driven by imbalances. Examples of such processes include chemical reactions, or flows of particles and heat. Unlike with equilibrium, there is no exact formalism that applies to non-equilibrium statistical mechanics in general and so this branch of statistical mechanics remains an active area of theoretical research.

    2nd_law_SM is nowhere to be found. I don’t think it is appropriate to represent 2nd_law_SM as some sort of accepted version of the 2nd law of thermodynamics. It is not; it’s not even thermodynamics. “2nd_law_SM” deals with systems not describable by thermodynamic state variables such as Temperature, Volume, Number of Particles, and Pressure. It is therefore not a thermodynamic law. If you say “a random process will lead to the most probable state”, that is another restatement of LLN; it is not the 2nd law of thermodynamics.

    The reader can try googling 2nd_law_SM or “laws of statistical mechanics”. You won’t find many matches, because it is an idiosyncratic construction; it is not the 2nd law of thermodynamics, which is simply stated by CLAUSIUS:

    Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.

    or KELVIN:

    It is impossible, by means of inanimate material agency, to derive mechanical effect from any portion of matter by cooling it below the temperature of the coldest of the surrounding objects

    I don’t think it is too much to ask for a statement of 2nd_law_SM and where it’s been codified by professionals in the field like say, Dr. Lambert. If there is no such thing, it is only appropriate to say it is a construction unique to UD, and further, it is inappropriate to try to represent it as some sort of accepted version of the 2nd law of thermodynamics.

    Call it Niwrad_KF_law_of_thermodynamics; that would be more accurate. Don’t call it the 2nd law of Statistical Mechanics, because there is no such thing. There might be derivations of the 2nd law of thermodynamics from statistical mechanics, but that is not the same as Niwrad_KF_law_of_thermodynamics or Niwrad_KF_2nd_law_of_statistical_mechanics.

  26.
    kairosfocus says:

    SalC:

    Pardon, but this begins to look rather unnecessary, given what has been discussed above, in the OP, and elsewhere:

    where then will I find 2nd_law_SM in textbook or even journaled physics?

    Niw’s material point is that the 2nd law has a statistical foundation; one established for over a hundred years. Namely, the spontaneous direction of change in a system free to evolve is towards dominant clusters of microstates, in which distribution of mass and energy at micro levels will move towards higher numbers of possibilities.

    That point is valid and should be acknowledged first and foremost.

    Thereafter any infelicities of phrasing (for one for whom English is a distinct 2nd or more language) can be addressed.

    Nor am I doing anything so pretentious as proposing to put forward a new law of Science, I have been repeatedly careful to point to 100+ years of foundational stat mech that elucidates the roots of 2LOT, and brings forth the statistical underpinnings. Which, I distinctly recall, was taught to me long since when I studied the subject. Such is also, as well you know or should know, a commonplace in thermodynamics education. It goes so far back that I would be surprised to see it in journals of any currency or recency. It will instead be in the classics by Gibbs, Boltzmann and co, and it will be part of the general discussion; indeed it is a consequence of core principles of stat mech, but is of great empirical significance.

    BTW, likewise the first law of motion strictly speaking is a trivial consequence of the second (once we set F = 0, a must be 0), but it is of great conceptual and real-world significance.

    A similar pattern obtains with stat mech underpinnings and the 2nd law; which, recall, was first effectively empirically identified, by several different workers.

    For example here is Wiki on the 2nd law, in a subsection with an interesting title:

    Derivation from statistical mechanics
    Further information: H-theorem

    Due to Loschmidt’s paradox, derivations of the Second Law have to make an assumption regarding the past, namely that the system is uncorrelated at some time in the past; this allows for simple probabilistic treatment. This assumption is usually thought as a boundary condition, and thus the second Law is ultimately a consequence of the initial conditions somewhere in the past, probably at the beginning of the universe (the Big Bang), though other scenarios have also been suggested.[54][55][56]

    Given these assumptions, in statistical mechanics, the Second Law is not a postulate, rather it is a consequence of the fundamental postulate, also known as the equal prior probability postulate, so long as one is clear that simple probability arguments are applied only to the future, while for the past there are auxiliary sources of information which tell us that it was low entropy.[citation needed] The first part of the second law, which states that the entropy of a thermally isolated system can only increase, is a trivial consequence of the equal prior probability postulate, if we restrict the notion of the entropy to systems in thermal equilibrium . . . .

    Suppose we have an isolated system whose macroscopic state is specified by a number of variables. These macroscopic variables can, e.g., refer to the total volume, the positions of pistons in the system, etc. Then W will depend on the values of these variables. If a variable is not fixed, (e.g. we do not clamp a piston in a certain position), then because all the accessible states are equally likely in equilibrium, the free variable in equilibrium will be such that W is maximized as that is the most probable situation in equilibrium.

    If the variable was initially fixed to some value then upon release and when the new equilibrium has been reached, the fact the variable will adjust itself so that W is maximized, implies that the entropy will have increased or it will have stayed the same (if the value at which the variable was fixed happened to be the equilibrium value). Suppose we start from an equilibrium situation and we suddenly remove a constraint on a variable. Then right after we do this, there are a number W of accessible microstates, but equilibrium has not yet been reached, so the actual probabilities of the system being in some accessible state are not yet equal to the prior probability of 1/W. We have already seen that in the final equilibrium state, the entropy will have increased or have stayed the same relative to the previous equilibrium state . . . .

    The second part of the Second Law states that the entropy change of a system undergoing a reversible process is given by:

    dS = d’Q/T

    where the temperature is defined as:

    1/(k*T) = β = d ln W(E) / dE

    [ . . . after derivation (follow the link) . . . dS will be as expected]

    Other sources will run along much the same lines. 2LOT is underpinned by statistical thermodynamics.

    So, to try to set up a strawman and attach Niw’s and my name to it is a case of implicit ad hominem abusive by strawman caricature.

    Kindly, stop it.

    As for textbook summaries, a simple one is right there in my linked note, as is cited at 5 above:

    Yavorski and Pinski, in the textbook Physics, Vol I [MIR, USSR, 1974, pp. 279 ff.], summarise the key implication of the macro-state and micro-state view well: as we consider a simple model of diffusion, let us think of ten white and ten black balls in two rows in a container. There is of course but one way in which there are ten whites in the top row; the balls of any one colour being for our purposes identical. But on shuffling, there are 63,504 ways to arrange five each of black and white balls in the two rows, and 6-4 distributions may occur in two ways, each with 44,100 alternatives. So, if we for the moment see the set of balls as circulating among the various different possible arrangements at random, and spending about the same time in each possible state on average, the time the system spends in any given state will be proportionate to the relative number of ways that state may be achieved. Immediately, we see that the system will gravitate towards the cluster of more evenly distributed states. In short, we have just seen that there is a natural trend of change at random, towards the more thermodynamically probable macrostates, i.e. the ones with higher statistical weights. So “[b]y comparing the [thermodynamic] probabilities of two states of a thermodynamic system, we can establish at once the direction of the process that is [spontaneously] feasible in the given system. It will correspond to a transition from a less probable to a more probable state.” [p. 284.] This is in effect the statistical form of the 2nd law of thermodynamics. Thus, too, the behaviour of the Clausius isolated system above [with interacting sub-systems A and B that transfer d’Q to B due to temp. difference] is readily understood: importing d’Q of random molecular energy so far increases the number of ways energy can be distributed at micro-scale in B, that the resulting rise in B’s entropy swamps the fall in A’s entropy. Moreover, given that [FSCO/I]-rich micro-arrangements are relatively rare in the set of possible arrangements, we can also see why it is hard to account for the origin of such states by spontaneous processes in the scope of the observable universe. (Of course, since it is as a rule very inconvenient to work in terms of statistical weights of macrostates [i.e. W], we instead move to entropy, through s = k ln W. Part of how this is done can be seen by imagining a system in which there are W ways accessible, and imagining a partition into parts 1 and 2. W = W1*W2, as for each arrangement in 1 all accessible arrangements in 2 are possible and vice versa, but it is far more convenient to have an additive measure, i.e. we need to go to logs. The constant of proportionality, k, is the famous Boltzmann constant and is in effect the universal gas constant, R, on a per molecule basis, i.e. we divide R by the Avogadro Number, NA, to get: k = R/NA. The two approaches to entropy, by Clausius, and Boltzmann, of course, correspond. In real-world systems of any significant scale, the relative statistical weights are usually so disproportionate, that the classical observation that entropy naturally tends to increase, is readily apparent.)
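
    The multiplicities quoted in that passage are easy to verify (a short check, counting positions for the white balls in a top and a bottom row of ten each):

```python
# Verify the ball-shuffling multiplicities quoted above: choose positions
# for the white balls in a top row of 10 and a bottom row of 10.
from math import comb

def ways(white_on_top):
    """Arrangements with the given number of white balls in the top row."""
    return comb(10, white_on_top) * comb(10, 10 - white_on_top)

print(ways(10))    # all ten whites on top: 1 way
print(ways(5))     # 5-5 split: 63,504 ways
print(ways(6))     # 6-4 split: 44,100 ways (likewise for 4-6)
print(sum(ways(k) for k in range(11)))   # C(20,10) = 184,756 in all
```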

    The basic point should be plain: 2LOT is closely tied to the statistical analysis, and that same analysis is what highlights the problems with the notion of trying to resort to irrelevant energy and mass flows as though they provide compensation rendering 2LOT a non-problem.

    KF

  27. 27
    niwrad says:

    kairosfocus, well, yes, there is not a law of physics with our name 🙁 but since, according to scordova, all physics textbooks say that systems go toward improbability, maybe you and I together could at least send a mail to ask for an errata corrige, no? 🙂 😉

  28. 28
    scordova says:

    Pardon, but this begins to look rather unnecessary

    Pardon, you give a 4,458 word OP and yet are unwilling to state the actual version of the 2nd Law you are working from.

    Please state the 2nd law you are working from and then make a deduction based on the 2nd law that a random process will not result in 500 fair coins 100% heads.

    The problem is that the 2nd law is restricted to only certain kinds of microstates and macrostates, whereas the LLN is not.

    So, to try to set up a strawman and attach Niw’s and my name to it is a case of implicit ad hominem abusive by strawman caricature.

    Neither you nor Niw has given an accepted definition of the 2nd law; it is inappropriate to call it 2nd_law_SM as if it were an actually accepted law among physicists, chemists and engineers.

    In actuality it is Niwrad_KFs_2nd_law_of_statistical_mechanics. It is not the 2nd law of thermodynamics. To insist that 2nd_law_SM is the second law of thermodynamics is an equivocation.

    State the 2nd_law_SM, and then cite where it is accepted as a definition in practice. If it isn’t a real law in the literature, it would be best to stop pretending it is.

    Frankly, I wouldn’t want the next generation of IDists to be exposed to and taught equivocations rather than the real thing, and the real thing is:

    KELVIN STATEMENT OF 2nd LAW OF THERMODYNAMICS

    It is impossible, by means of inanimate material agency, to derive mechanical effect from any portion of matter by cooling it below the temperature of the coldest of the surrounding objects

  29. 29
    groovamos says:

    scordova: I don’t think ID is science, even though I believe it is true….

    So a project to systematically overturn an entrenched view held by scientists, who consider such a view as scientific, cannot itself be scientific if successful? And how is the man in the 19th century (who was not what I would consider a scientist), creating a ‘scientific’ viewpoint akin to a substitute religion, going to be refuted by anything other than what one could call a science? A project which is subjected to a would-be scientific take-down in the journal Science cannot have any scientific basis? Maybe you can precisely delineate for us your categorization scheme to be able to define ‘science’ in your view, please.

    Maybe even look at it this way. I, as a holder of an advanced degree in a STEM discipline, am constantly evaluating my debate strategy with the Darwinian opposition, and my evaluations are constantly informed by my training in order to take these guys and their purported story down. So if you are similarly trained and are not doing the same, please explain. Remember, the opposition is not letting up in their ‘scientific’ stance to debate us, so what good does it do to give up the ‘scientific’ moniker to people who can’t seem to ignore us?

  30. 30
    kairosfocus says:

    SalC: it seems clear that you have neither read nor acknowledged the mere existence of the point I just made to extend previous remarks about the statistical thermodynamics underpinnings of 2LOT; which includes a relevant citation and onward link to a source not noted for sympathy to ID but which on such technical matters has to acknowledge certain facts. You then proceeded to reiterate an ad hominem abusive name assignment. This is not good enough, and I think you owe us a fairly serious explanation for your behaviour. KF

  31. 31
    scordova says:

    statistical thermodynamics underpinnings of 2LOT

    So statistical mechanics underpins 2LOT, and statistical mechanics encompasses and is underpinned by:

    1. fundamental theorem of calculus
    2. Newton’s 2nd law
    3. Hamilton’s principle
    4. Liouville theorem
    5. Law of Large Numbers
    6. Quantum Mechanics (including Schrödinger’s Equation)
    7. Classical Mechanics
    8. General Relativity (the statistical mechanics of, say, the Bekenstein bound)
    9. Electromagnetic theory
    etc. etc.

    Does that justify saying 2nd_law_SM should be used to show the trajectory of rocket ships, since according to 2nd_law_SM it would “englobe” all sorts of existing bodies of physics and mathematics, including Newton’s 2nd law (since Newton’s 2nd law of Motion is necessary for the Gibbs-type derivation of statistical mechanical entropy through the Liouville theorem)? Of course not. Would we use the 2nd_law_SM (which supposedly englobes general relativity, since the most general 2nd law involves Bekenstein, which involves general relativity) to prove gravitational lensing? Of course not!

    So it is just as inappropriate to say that, just because statistical mechanics uses the Law of Large Numbers (LLN), 2nd_law_SM somehow proves results in situations where the LLN would be used — such as analyzing 500 fair coins coming up 100% heads.

    it seems clear that you have neither read nor acknowledged the mere existence of the point I just made to extend previous remarks about the statistical thermodynamics underpinnings of 2LOT;

    Wrong. It is clear you don’t acknowledge that, just because 2LOT might be approximately derived in part from Newtonian/Classical Mechanics as Gibbs demonstrated, it does not follow that you can use 2LOT as the basis of Classical Mechanics. In like manner, you can’t use 2LOT to justify statistical mechanics, nor the Law of Large Numbers, nor 500 fair coins being 100% heads.

    The only place such invalid backward inferences are allowable is in Niwrad_KF_2nd_law_of_statistical_mechanics. It’s not valid physics nor logic.

    I think you owe us a fairly serious explanation for your behaviour.

    I don’t want IDists to think Niwrad_KF_2nd_law_of_statistical_mechanics is a correct representation of the 2nd law of thermodynamics.

    Btw, you’ve yet to state the 2nd_law_SM in any clear, concise, understandable and usable way. Here, I’ll give you a paraphrase from what I’ve been reading:

    The 2nd_law_SM states that:

    All the knowledge that statistical mechanics is founded on is valid. That foundational knowledge upon which statistical mechanics is built can be used to demonstrate whatever can be demonstrated by said foundational knowledge.

    That said knowledge englobes, but is not limited to all the collected knowledge of:

    1. Mathematics
    2. Physics
    3. Chemistry
    4. Cybernetics

    and pretty much anything that can be used by the discipline of statistical mechanics.

    That’s about all that you’ve really said, which is rather vacuous as a law.

    If you don’t like that paraphrase, provide for the reader a workable definition of 2nd_law_sm.

  32. 32
    kairosfocus says:

    SalC: I still do not see a substantial response to some fairly specific information already posted in 26 above, that shows exactly how stat Mech undergirds 2LOT in ways that then apply directly to the issue of origin of FSCO/I. You are continuing to play at tag and dismiss, inappropriately. Is that an admission, by implication? Please, do better. KF

  33. 33
    Querius says:

    Kairosfocus,

    First of all, thank you for a well-researched, well-thought out, and wonderfully articulated article! And thank you for taking the time and effort to put it together.

    Here are some thoughts inspired by your article and comments.

    * Measuring information and design quantitatively – This is very hard to do, perhaps impossible. I’ve never been comfortable with the idea. For example, a large computer program does not necessarily require more information or more design than a small one. In fact, a small one that efficiently meets requirements can be extremely challenging to design. Requirements for human manufacturing are often evident in design for manufacturability, design for low cost, design for common parts, design for low tolerances, design for ruggedness, design for maintainability (or not), and so on. This might not be self-evident. For example, which is a “better” design, a deciduous leaf or an evergreen leaf? Which one is more expensive for an organism to manufacture?

    * Complexity and probability – What’s the chance of the design-free origin of a Boeing 777 or a city? The complex chemical cycles, codes, and physical operations within a cell are billions and billions and billions of times more complex than anything humans ever created. A mere 13.7 billion years is not remotely enough time to account for life anywhere in the universe. Irreducible complexity challenges the notion that incredibly complex, self-assembling, self-replicating, and self-repairing systems can derive from random collections of parts by tiny steps, each of which results in a more persistent configuration than being separate.

    * Existence – The instantaneous and simultaneous origin of time, mass-energy, and spatial dimensions cannot be explained by pre-existing physical laws; probability doesn’t exist before time; and the instantaneous ex nihilo existence of the universe is a miracle, not science. There’s no other option outside of wishful thinking.

    * On God I – The same people who choke on the question of what created God easily swallow the existence of the multiverse.

    * On God II – The same people who rage against God for allowing “evil” will then themselves perform “evil” things when they feel like it without remorse and without rage.

    * On God III – The people who criticize and then deny the existence of God, first project a ridiculously simplistic anthropomorphic caricature of Him as a straw man, while denying the ridiculously obvious possibility that they might not be able to comprehend God. Thus, the denial of God is not an intellectual objection, but rather an objection of the will.

    Thanks again for your work!

    -Q

  34. 34
    scordova says:

    SalC: I still do not see a substantial response to some fairly specific information already posted in 26 above, that shows exactly how stat Mech undergirds 2LOT in ways that then apply directly to the issue of origin of FSCO/I. You are continuing to play at tag and dismiss, inappropriately. Is that an admission, by implication? Please, do better. KF

    The equal prior probability postulate in the formulation of the 2nd law applies to thermodynamic microstates, not microstates related to FSCO/I states. So in what textbook is FSCO/I a thermodynamic state? I pointed out you can’t equivocate thermodynamic states with states of interest to IDists. 500 fair coins 100% heads is a state of hypothetical interest to IDists. The heads/tails microstates obey the equal probability postulate, but heads/tails microstates aren’t thermodynamic microstates. Just because heads/tails microstates obey the equal probability postulate doesn’t mean you can apply the 2nd law of thermodynamics to them!

    And again, where is the statement of 2nd_law_SM?

    Please, do better. KF

    Quit pretending your equivocations are actual arguments. Heads/tails configurations aren’t thermodynamic microstates, and neither are more complicated design states generally thermodynamic microstates.

  35. 35
    scordova says:

    in statistical mechanics, the Second Law is not a postulate, rather it is a consequence of the fundamental postulate, also known as the equal prior probability postulate

    Ah yes, statistical mechanics is founded on the “equal prior probability postulate”. Since the 2nd law can proceed from the equal prior probability postulate, does that imply “the 2nd law is then applicable to the equal prior probability of each of the possible 500 fair coins configurations”? No!

    To say so would be as backward and invalid as saying the 2nd law of thermodynamics proves Rolle’s Theorem because the 2nd law utilizes calculus.

    Equal prior probability applies to a system of 500 fair coins because the coins are fair, not because of the 2nd law of thermodynamics! The statistics of the 2^500 possible heads/tails microstates proceeds from the coins being fair, not because of the 2nd law of thermodynamics.
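    For concreteness, here is a minimal sketch of the arithmetic (added for illustration; everything in it follows from the fairness of the coins alone, exactly as argued above): each of the 2^500 heads/tails configurations is equiprobable, all-heads is just one configuration out of 2^500, and the overwhelming bulk of configurations sits near the 50-50 ratio — the LLN point.

    ```python
    from math import comb, sqrt

    n = 500
    total = 2 ** n                       # 2^500 equiprobable heads/tails configurations
    print(f"P(all heads) = 2^-{n} ~ {1 / total:.3e}")

    sd = sqrt(n * 0.5 * 0.5)             # standard deviation of the number of heads
    print(f"SD of the number of heads ~ {sd:.1f}")

    # Fraction of all configurations lying within 225..275 heads (about +/- 2.2 SD of 250):
    near_even = sum(comb(n, h) for h in range(225, 276))
    print(f"fraction of configurations with 225..275 heads ~ {near_even / total:.4f}")
    ```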

  36. 36
    scordova says:

    groovamos:

    Maybe you can precisely delineate for us your categorization scheme to be able to define ‘science’ in your view, please.

    Science is observation and formulation of falsifiable hypotheses, and hypothesis testing via experiments.

    I think many of the claims of history are true, like George Washington was the first president. But such truth claims are not the soul of science even though they are true claims.

    The one part of ID that is scientific and testable is rejection of chance and law as an explanation. That’s good enough for me, and that is scientific.

    The inference to an intelligent designer, the “I” in ID however, is formally only an inference. ID strictly speaking, absent the Designer himself, seems outside of experimental science. But that is my opinion, not those of other IDists.

    For the record, I believe ID is true, but I’m not eager to call it science. Showing that chance and law cannot make certain designs, like algorithmic metabolisms — that is science.

  37. 37
    bornagain77 says:

    As to Sal’s contention:

    “I’ve argued against using information theory type arguments in defense of ID, it adds way too much confusion. Basic probability will do the job, and basic probability is clear and unassailable.”

    Whilst I would agree with the overall sentiment of that statement, I must point out that information has now been physically measured and shown to have a ‘thermodynamic content’:

    While neo-Darwinian evolution has no evidence that material processes can generate functional, i.e. prescriptive, information, it is now shown that information introduced by the knowledge of an intelligence can ‘locally’ violate the second law and generate potential energy:

    Maxwell’s demon demonstration (knowledge of a particle’s position) turns information into energy – November 2010
    Excerpt: Scientists in Japan are the first to have succeeded in converting information into free energy in an experiment that verifies the “Maxwell demon” thought experiment devised in 1867.,,, In Maxwell’s thought experiment the demon creates a temperature difference simply from information about the gas molecule temperatures and without transferring any energy directly to them.,,, Until now, demonstrating the conversion of information to energy has been elusive, but University of Tokyo physicist Masaki Sano and colleagues have succeeded in demonstrating it in a nano-scale experiment. In a paper published in Nature Physics they describe how they coaxed a Brownian particle to travel upwards on a “spiral-staircase-like” potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information.
    http://www.physorg.com/news/20.....nergy.html

    Demonic device converts information to energy – 2010
    Excerpt: “This is a beautiful experimental demonstration that information has a thermodynamic content,” says Christopher Jarzynski, a statistical chemist at the University of Maryland in College Park. In 1997, Jarzynski formulated an equation to define the amount of energy that could theoretically be converted from a unit of information2; the work by Sano and his team has now confirmed this equation. “This tells us something new about how the laws of thermodynamics work on the microscopic scale,” says Jarzynski.
    http://www.scientificamerican......rts-inform
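    For a rough sense of the energy scale involved (an added back-of-envelope figure, not taken from either article, and assuming the familiar Szilard/Landauer bound of kT ln 2 of work per bit): near room temperature one bit of positional information is worth only a few times 10^-21 J, which is why such demonstrations need nano-scale apparatus and very delicate measurement.

    ```python
    from math import log

    k = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0          # roughly room temperature, K

    work_per_bit = k * T * log(2)   # Szilard/Landauer bound: kT ln 2 of work per bit
    print(f"kT ln 2 at {T:.0f} K ~ {work_per_bit:.2e} J per bit")   # ~ 2.9e-21 J
    ```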

    The importance of this experiment cannot be overstated. Materialists hold that information is not physically real but is merely ’emergent’ from a material basis.
    In fact, in the past I have had, and many others on UD have had, debates with materialists defending the fact that the information in the cell is not simply a metaphor but is in fact real.,, More than once I have used the following reference to refute the ‘information is just a metaphor’ claim of materialists:

    Information Theory, Evolution, and the Origin of Life – Hubert P. Yockey, 2005
    Excerpt: “Information, transcription, translation, code, redundancy, synonymous, messenger, editing, and proofreading are all appropriate terms in biology. They take their meaning from information theory (Shannon, 1948) and are not synonyms, metaphors, or analogies.”
    http://www.cambridge.org/catal.....038;ss=exc

    Thus the fact that information is now found to have a ‘thermodynamic content’ and to be physically real is of no small importance.

    Moreover, the finding that information has a ‘thermodynamic content’, and is thus physically real, directly supports Andy C. McIntosh’s claim about information and the thermodynamics of the cell.

    Specifically, Andy C. McIntosh, professor of thermodynamics and combustion theory at the University of Leeds, holds that non-material information is what is constraining the cell to be in such an extremely high thermodynamic non-equilibrium state. Moreover, Dr. McIntosh holds that regarding information to be independent of energy and matter, instead of emergent from energy and matter, ‘resolves the thermodynamic issues and invokes the correct paradigm for understanding the vital area of thermodynamic/organisational interactions’.

    Information and Thermodynamics in Living Systems – Andy C. McIntosh – 2013
    Excerpt: ,,, information is in fact non-material and that the coded information systems (such as, but not restricted to the coding of DNA in all living systems) is not defined at all by the biochemistry or physics of the molecules used to store the data. Rather than matter and energy defining the information sitting on the polymers of life, this approach posits that the reverse is in fact the case. Information has its definition outside the matter and energy on which it sits, and furthermore constrains it to operate in a highly non-equilibrium thermodynamic environment. This proposal resolves the thermodynamic issues and invokes the correct paradigm for understanding the vital area of thermodynamic/organisational interactions, which despite the efforts from alternative paradigms has not given a satisfactory explanation of the way information in systems operates.,,,
    http://www.worldscientific.com.....08728_0008

    Here is a fairly recent video by Dr. Giem, that gets the main points of Dr. McIntosh’s paper over very well for the lay person:

    Biological Information – Information and Thermodynamics in Living Systems 11-22-2014 by Paul Giem (A. McIntosh) – video
    https://www.youtube.com/watch?v=IR_r6mFdwQM

    Of supplemental note: On top of classical information, ‘quantum information’ is now found in the cell.

    First, it is important to learn that ‘non-local’, beyond space and time, quantum entanglement (A. Aspect, A. Zeilinger, etc..) can be used as a ‘quantum information channel’,,,

    Quantum Entanglement and Information
    Quantum entanglement is a physical resource, like energy, associated with the peculiar nonclassical correlations that are possible between separated quantum systems. Entanglement can be measured, transformed, and purified. A pair of quantum systems in an entangled state can be used as a quantum information channel to perform computational and cryptographic tasks that are impossible for classical systems. The general study of the information-processing capabilities of quantum systems is the subject of quantum information theory.
    http://plato.stanford.edu/entries/qt-entangle/

    And this ‘quantum entanglement/information’ is now found in the cell on a massive scale in every DNA and protein molecule. Moreover this ‘quantum entanglement/information’ is implicated in performing quantum computation in the cell so as to provide resolutions to the unsolved problems of protein folding and DNA repair (notes and references given upon request).

    Quantum Information/Entanglement In DNA – short video
    https://vimeo.com/92405752

    Classical and Quantum Information Channels in Protein Chain – Dj. Koruga, A. Tomić, Z. Ratkaj, L. Matija – 2006
    Abstract: Investigation of the properties of peptide plane in protein chain from both classical and quantum approach is presented. We calculated interatomic force constants for peptide plane and hydrogen bonds between peptide planes in protein chain. On the basis of force constants, displacements of each atom in peptide plane, and time of action we found that the value of the peptide plane action is close to the Planck constant. This indicates that peptide plane from the energy viewpoint possesses synergetic classical/quantum properties. Consideration of peptide planes in protein chain from information viewpoint also shows that protein chain possesses classical and quantum properties. So, it appears that protein chain behaves as a triple dual system: (1) structural – amino acids and peptide planes, (2) energy – classical and quantum state, and (3) information – classical and quantum coding. Based on experimental facts of protein chain, we proposed from the structure-energy-information viewpoint its synergetic code system.
    http://www.scientific.net/MSF.518.491

    That ‘non-local’ quantum entanglement, which conclusively demonstrates that ‘information’ in its pure ‘quantum form’ is completely transcendent of any time and space constraints (Bell, Aspect, Leggett, Zeilinger, etc..), should be found in molecular biology on such a massive scale, in every DNA and protein molecule, is a direct empirical falsification of Darwinian claims, for how can the ‘non-local’ quantum entanglement ‘effect’ in biology possibly be explained by a material (matter/energy) cause when the quantum entanglement effect falsified material particles as its own causation in the first place? Appealing to the probability of various ‘random’ configurations of material particles, as Darwinism does, simply will not help since a timeless/spaceless cause must be supplied which is beyond the capacity of the material particles themselves to supply!

    Looking beyond space and time to cope with quantum theory – 29 October 2012
    Excerpt: “Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them,”
    http://www.quantumlah.org/high.....uences.php

  38. 38
    bornagain77 says:

    In other words, to give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints, one is forced to appeal to a cause that is itself not limited to time and space! Put more simply, you cannot explain an effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments about various ‘special’ configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply since the cause is not within the material particles in the first place!
    Of related interest, classical information is found to be a subset of ‘non-local’ (i.e. beyond space and time) quantum entanglement/information by the following method:

    Quantum knowledge cools computers: New understanding of entropy – June 2011
    Excerpt: No heat, even a cooling effect;
    In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.”
    http://www.sciencedaily.com/re.....134300.htm

    Vlatko Vedral – Entanglement and its relationship to thermodynamics – QuICC Lecture 1
    https://www.youtube.com/watch?v=sBBxIa2CK6o
    Vlatko Vedral – Entanglement and its relationship to thermodynamics – QuICC Lecture 2
    https://www.youtube.com/watch?v=wNpD5tjs0Cs
    Vlatko Vedral – Entanglement and its relationship to thermodynamics – QuICC Lecture 3
    https://www.youtube.com/watch?v=t5PCYhlXLHA

    ,,,And here is the evidence that quantum information is in fact ‘conserved’;,,,

    Quantum no-hiding theorem experimentally confirmed for first time
    Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment.
    http://www.physorg.com/news/20.....tally.html

    Quantum no-deleting theorem
    Excerpt: A stronger version of the no-cloning theorem and the no-deleting theorem provide permanence to quantum information. To create a copy one must import the information from some part of the universe and to delete a state one needs to export it to another part of the universe where it will continue to exist.
    http://en.wikipedia.org/wiki/Q.....onsequence

    Black holes don’t erase information, scientists say – April 2, 2015
    Excerpt: The “information loss paradox” in black holes—a problem that has plagued physics for nearly 40 years—may not exist.,,,
    This is an important discovery, Stojkovic says, because even physicists who believed information was not lost in black holes have struggled to show, mathematically, how this happens. His new paper presents explicit calculations demonstrating how information is preserved, he says.
    The research marks a significant step toward solving the “information loss paradox,” a problem that has plagued physics for almost 40 years, since Stephen Hawking first proposed that black holes could radiate energy and evaporate over time. This posed a huge problem for the field of physics because it meant that information inside a black hole could be permanently lost when the black hole disappeared—a violation of quantum mechanics, which states that information must be conserved.
    http://phys.org/news/2015-04-b.....sts.html+/

    Information Conservation and the Unitarity of Quantum Mechanics
    Excerpt: “In more technical terms, information conservation is related to the unitarity of quantum mechanics. In this article, I will explain what unitarity is and how it’s related to information conservation.”
    http://youngsubyoon.com/QMunitarity.htm

  39. 39
    bornagain77 says:

    Besides providing direct empirical falsification of neo-Darwinian claims as to information being ’emergent’ from a material basis, the implication of finding ‘non-local’, beyond space and time, and ‘conserved’ quantum information in molecular biology on a massive scale is fairly, and pleasantly, obvious:

    Does Quantum Biology Support A Quantum Soul? – Stuart Hameroff – video (notes in description)
    http://vimeo.com/29895068

    Quantum Entangled Consciousness – Life After Death – Stuart Hameroff – video
    https://vimeo.com/39982578

    Verse and Music:

    John 1:1-4
    In the beginning was the Word, and the Word was with God, and the Word was God. He was in the beginning with God. All things were made through Him, and without Him nothing was made that was made. In Him was life, and the life was the light of men.

    The Word Is Alive – Casting Crowns
    https://www.youtube.com/watch?v=8rRF1FQed7c

  40. 40
    scordova says:

    KF and Niwrad,

    Old comrades, sorry we disagree. Our differences on the question of 2LOT are obviously irreconcilable.

    Apologies for any harsh words. Peace be with you, and God bless.

    Sal

  41. 41
    Germanicus says:

    bornagain77 at #37,

    usually I have no time to check your list of references (too long, mainly off topic, etc.), but your announcement of an experiment that demonstrates that information can violate (even if only “locally”, as you added cautiously) the SLOT was so stunning that I looked at the paper in more detail.
    So I was really disappointed to read, in the continuation of the article, this passage that you omitted:

    “The experiment did not violate the second law of thermodynamics because energy was consumed in the experiment by the apparatus used, and by the experimenters themselves, who did work in monitoring the particle and adjusting the voltage, … ”

    So, no breaking news, the old SLOT was not violated.

  42. 42
    kairosfocus says:

    SalC:

    I have shown you the general framework that grounds 2LOT in stat mech, a framework that is generally uncontroversial as a first reasonable approach.

    It is therefore time for you to acknowledge that, rather than covering it with a Wilson Arte of Rhetorique blanket of silence and then pivoting to the next objection.

    The further objection also fails because the first relevant context is Darwin’s pond and the like.

    Indeed, thermodynamically and chemical-kinetics wise, there is a big problem getting to the relevant monomers for life and sustaining any significant concentration. That goes all the way back to Thaxton et al and likely beyond.

    For OOBP, we have encapsulation and we have a situation where to cross the sea of non-function, there can be no favourable selection pressure as there is no fitness to have a rising slope on . . . indeed, that is a key part of the OOBP problem. Remember, 10 – 100+ mn base pairs to account for. Dozens of times over.

    Where, it is known from the chaining chemistry for both proteins and D/RNA that there is no significant chemical sequencing bias in the chains. And if there were, it would reduce info carrying capacity for D/RNA and functional chaining for proteins. To see why consider H2O, which is polar.

    Once we drain away thermal agitation sufficiently for the polarisation to have an effect, we form definite crystalline patterns in a definite lattice imposed by the bias of the geometry and the polarisation.

    This of course brings out the point that the structuring and ordering come from a flow coupled to something that gives a pattern — here, one locked into the molecular structure and relatively inflexible.

    Constructors controlled by prescriptive information are able to impart organisation, not merely order; but they obviously require things to be sufficiently under control thermally — overheat an organism and it cooks, degrading the complex molecules.

    Even a high fever is dangerous.

    KF

    PS: I also must remind you that when someone like L K Nash uses coin flipping as a useful first model to capture the essential point of stat mech [i.e. a 2-state base element], that should give you pause before rushing off to dismissive language about equivocations. The logic involved is also quite general, it is a logic of states and statistics with onward links into the informational perspective that you patently are unwilling to address. I strongly suggest going to a library and pulling Nash. If it has Harry Robertson’s Statistical Thermophysics [Prentice], try that too.

  43. 43
    bornagain77 says:

    Germanicus, I certainly did not mean to imply in any way, shape, or fashion, that the second law was violated in the overall sense. That is precisely why I added ‘locally’ with scare quotes. Even your quote makes clear that the second law was violated ‘locally’ by saying energy was consumed elsewhere in the experiment.,,,

    “energy was consumed in the experiment by the apparatus used, and by the experimenters themselves, who did work in monitoring the particle and adjusting the voltage, … ”

    The main point I was trying to make clear is that information was shown to have a ‘thermodynamic content’. Seeing as Darwinists/Materialists have denied the physical reality of information through the years, that IS NOT a minor development!

    “In a paper published in Nature Physics they describe how they coaxed a Brownian particle to travel upwards on a “spiral-staircase-like” potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information.

    and,,,

    “This is a beautiful experimental demonstration that information has a thermodynamic content,” says Christopher Jarzynski, a statistical chemist at the University of Maryland in College Park. In 1997, Jarzynski formulated an equation to define the amount of energy that could theoretically be converted from a unit of information2; the work by Sano and his team has now confirmed this equation. “This tells us something new about how the laws of thermodynamics work on the microscopic scale,” says Jarzynski.”

    once again, showing that information has a thermodynamic content IS NOT a minor consideration seeing as Darwinists/Materialists deny the physical reality of information and consider it ’emergent’ from a material basis.

    I’m sorry what I wrote was not more clear to you as to that important ‘information is real’ point I was making. I repeated that important point several times in my post, and even gave evidence for non-local, beyond space and time, ‘quantum information’ in the cell! Which is another point that is also certainly far from a minor consideration, since it directly falsifies materialistic claims that information can be reduced to a material basis.

    Nonetheless, I will add a more nuanced caveat than the scare-quoted ‘locally’ that I used, so as to avoid any confusion in the future.

    Supplemental notes: The experiment also concretely verified that they are not just whistling Dixie in the following,,,

    “Is there a real connection between entropy in physics and the entropy of information? ….The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental…”
    Siegfried, Dallas Morning News, 5/14/90, [Quotes Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin]

    Biophysics – Information theory. Relation between information and entropy: – Setlow-Pollard, Ed. Addison Wesley
    Excerpt: Linschitz gave the figure 9.3 x 10^12 cal/deg or 9.3 x 10^12 x 4.2 joules/deg for the entropy of a bacterial cell. Using the relation H = S/(k ln 2), we find that the information content is 4 x 10^12 bits. Morowitz’ deduction from the work of Bayne-Jones and Rhees gives the lower value of 5.6 x 10^11 bits, which is still in the neighborhood of 10^12 bits. Thus two quite different approaches give rather concordant figures.
    https://docs.google.com/document/d/18hO1bteXTPOqQtd2H12PI5wFFoTjwg8uBAU5N0nEQIE/edit

    “a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widener Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.”
    – R. C. Wysong

    ‘The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica.”
    Carl Sagan, “Life” in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894

    Also of note:

    “It is CSI that enables Maxwell’s demon to outsmart a thermodynamic system tending toward thermal equilibrium”
    William Dembski, Intelligent Design, pg. 159

    MOVING ‘FAR FROM EQUILIBRIUM’ IN A PREBIOTIC ENVIRONMENT: The role of Maxwell’s Demon in life origin – DAVID L. ABEL
    Abstract: Can we falsify the following null hypothesis?
    “A kinetic energy potential cannot be generated by Maxwell’s Demon from an ideal gas equilibrium without purposeful choices of when to open and close the partition’s trap door.”
    If we can falsify this null hypothesis with an observable naturalistic mechanism, we have moved a long way towards modeling the spontaneous molecular evolution of life. Falsification is essential to discount teleology. But life requires a particular version of “far from equilibrium” that explains formal organization, not just physicodynamic self-ordering as seen in Prigogine’s dissipative structures. Life is controlled and regulated, not just constrained. Life follows arbitrary rules of behavior, not just invariant physical laws. To explain life’s origin and regulation naturalistically, we must first explain the more fundamental question, “How can hotter, faster moving, ideal gas molecules be dichotomized from cooler, slower moving, ideal gas molecules without the Demon’s choice contingency operating the trap door?”
    https://www.academia.edu/9963341/MOVING_FAR_FROM_EQUILIBRIUM_IN_A_PREBIOTIC_ENVIRONMENT_The_role_of_Maxwell_s_Demon_in_life_origin

  44. 44
    Germanicus says:

    bornagain77 at #43,

    to be clear: the SLOT in that experiment is not violated at all, either “locally” or in the overall sense. Indeed, this is just the compensation argument at work.

  45. 45
    kairosfocus says:

    F/N: I have added an excerpted derivation of 2LOT from statistical-molecular considerations to the OP, using the non equiprobable distribution case for S tracing to Gibbs. The original work to make this general connexion is over 100 years old. I trust, this will allow us to focus on the real issue, that “compensation” must be relevant. KF

    PS: I also added my comment and cite from Yavorski-Pinski, on the black-white balls diffusion model.

  46. 46
    kairosfocus says:

    Germanicus & BA77, you seem to be discussing a case of relevant compensation. KF

  47. 47
    bornagain77 says:

    Germanicus, you are (purposely?) missing the bigger point. I am well aware that the 2nd law was not violated in the overall sense and that the energy was paid for elsewhere in the experiment. I did not think for a moment that it was not. Again, I am sorry for any fault I may have had in your confusion.

    My main point is, and always has been, that physically real information was imparted to a system by intelligence to ‘locally’ violate the second law of the isolated system being worked on.

    As I pointed out before, the materialistic presupposition is that information is ’emergent’ from a matter-energy basis and is thus not ‘physically real’.

    In fact, also as I pointed out before, I have had more than one debate with atheists on UD who claimed the information in the cell was merely illusory, i.e. merely a ‘metaphor’.

    To have an experimental demonstration that information has a ‘thermodynamic content’ and is thus physically real is a direct falsification of that materialistic presupposition.

    Moreover, many Darwinists continually gripe that IDists have no ‘mechanism’ to appeal to explain the origination of information in cells. Yet here we have a direct demonstration that intelligence can ‘purposely’ impart physically real information into an isolated system whilst paying for the second law elsewhere outside that isolated system. i.e. Maxwell’s demon!

    Moreover, I also previously highlighted that non-local quantum information falsified neo-Darwinism more directly than even this experiment did.

    This direct falsification of a foundational neo-Darwinian claim by non-local quantum information might not seem like that big of a deal for you, but for me personally, the finding of ‘non-local’ quantum entanglement/information in molecular biology is a direct empirical falsification of the foundational neo-Darwinian claim that information can ’emerge’ from a material basis, and is thus, of no small importance as far as empirical science itself is concerned.

  48. 48
    bornagain77 says:

    Here is a semi-related point of interest on the second law and consciousness:

    Quantum Zeno effect
    “It has been experimentally confirmed,, that unstable particles will not decay, or will decay less rapidly, if they are observed. Somehow, observation changes the quantum system. We’re talking pure observation, not interacting with the system in any way.”
    Douglas Ell – Counting to God – pg. 189 – 2014 – Douglas Ell graduated early from MIT, where he double majored in math and physics. He then obtained a masters in theoretical mathematics from the University of Maryland. After graduating from law school, magna cum laude, he became a prominent attorney.

    Quantum Zeno Effect
    The quantum Zeno effect is,, an unstable particle, if observed continuously, will never decay.
    http://en.wikipedia.org/wiki/Quantum_Zeno_effect

    The reason why I am very impressed with the Quantum Zeno effect as to establishing consciousness’s primacy in quantum mechanics is, for one thing, that Entropy is, by a wide margin, the most finely tuned of initial conditions of the Big Bang:

    The Physics of the Small and Large: What is the Bridge Between Them? Roger Penrose
    Excerpt: “The time-asymmetry is fundamentally connected with the Second Law of Thermodynamics: indeed, the extraordinarily special nature (to a greater precision than about 1 in 10^10^123, in terms of phase-space volume) can be identified as the “source” of the Second Law (Entropy).”

    How special was the big bang? – Roger Penrose
    Excerpt: This now tells us how precise the Creator’s aim must have been: namely to an accuracy of one part in 10^10^123.
    (from the Emperor’s New Mind, Penrose, pp 339-345 – 1989)

    For another thing, it is interesting to note just how foundational entropy is in its explanatory power for actions within the space-time of the universe:

    Shining Light on Dark Energy – October 21, 2012
    Excerpt: It (Entropy) explains time; it explains every possible action in the universe;,,
    Even gravity, Vedral argued, can be expressed as a consequence of the law of entropy. ,,,
    The principles of thermodynamics are at their roots all to do with information theory. Information theory is simply an embodiment of how we interact with the universe —,,,
    http://crev.info/2012/10/shini.....rk-energy/

    In fact, entropy is also the primary reason why our physical, temporal, bodies grow old and die,,,

    Aging Process – 85 years in 40 seconds – video
    http://www.youtube.com/watch?v=A91Fwf_sMhk

    *3 new mutations every time a cell divides in your body
    * Average cell of 15 year old has up to 6000 mutations
    *Average cell of 60 year old has 40,000 mutations
    Reproductive cells are ‘designed’ so that, early on in development, they are ‘set aside’ and thus they do not accumulate mutations as the rest of the cells of our bodies do. Regardless of this protective barrier against the accumulation of slightly detrimental mutations still we find that,,,
    *60-175 mutations are passed on to each new generation.
    Per John Sanford

    Entropy Explains Aging, Genetic Determinism Explains Longevity, and Undefined Terminology Explains Misunderstanding Both – 2007
    Excerpt: There is a huge body of knowledge supporting the belief that age changes are characterized by increasing entropy, which results in the random loss of molecular fidelity, and accumulates to slowly overwhelm maintenance systems [1–4].,,,
    http://www.plosgenetics.org/ar.....en.0030220

    And yet, to repeat,,,

    Quantum Zeno effect
    “It has been experimentally confirmed,, that unstable particles will not decay, or will decay less rapidly, if they are observed. Somehow, observation changes the quantum system. We’re talking pure observation, not interacting with the system in any way.”
    Douglas Ell – Counting to God – pg. 189 – 2014

    Quantum Zeno effect
    Excerpt: The quantum Zeno effect is,,, an unstable particle, if observed continuously, will never decay.
    per wiki

    This is just fascinating! Why in blue blazes should conscious observation put a freeze on entropic decay, unless consciousness was/is more foundational to reality than the 1 in 10^10^123 entropy is?

  49. 49
    kairosfocus says:

    BA77:

    Thanks again, Abel 2012 is good, in MOVING ‘FAR FROM EQUILIBRIUM’ IN A PREBIOTIC ENVIRONMENT: The role of Maxwell’s Demon in life origin :

    Mere heterogeneity and/or order do not even begin to satisfy the necessary and sufficient conditions for life. Self-ordering tendencies provide no mechanism for self-organization, let alone abiogenesis. All sorts of physical astronomical “clumping,” weak-bonded molecular alignments, phase changes, and outright chemical reactions occur spontaneously in nature that have nothing to do with life. Life is organization-based, not order-based. As we shall see below in Section 6, order is poisonous to organization.

    Stochastic ensembles of nucleotides and amino acids can polymerize naturalistically (with great difficulty). But functional sequencing of those monomers cannot be determined by any fixed physicodynamic law. It is well-known that only one 150-mer polyamino acid string out of 10^74 stochastic ensembles folds into a tertiary structure with any hint of protein function (Axe, 2004). This takes into full consideration the much publicized substitutability of amino acids without loss of function within a typical protein family membership. The odds are still only one functional protein out of 10^74 stochastic ensembles. And 150 residues are of minimal length to qualify for protein status. Worse yet, spontaneously condensed Levo-only peptides with peptide-only bonds between only biologically useful amino acids in a prebioitic environment would rarely exceed a dozen mers in length. Without polycodon prescription and sophisticated ribosome machinery, not even polypeptides form that would contribute much to “useful biological work.” . . . .

    There are other reasons why merely “moving far from equilibrium” is not the key to life as seems so universally supposed. Disequilibrium stemming from mere physicodynamic constraints and self-ordering phenomena would actually be poisonous to life-origin (Abel, 2009b). The price of such constrained and self-ordering tendencies in nature is the severe reduction of Shannon informational uncertainty in any physical medium (Abel, 2008b, 2010a). Self-ordering processes preclude information generation because they force conformity and reduce freedom of selection. If information needs anything, it is the uncertainty made possible by freedom from determinism at true decisions nodes and logic gates. Configurable switch-settings must be physicodynamically inert (Rocha, 2001; Rocha & Hordijk, 2005) for genetic programming and evolution of the symbol system to take place (Pattee, 1995a, 1995b). This is the main reason that Maxwell’s Demon model must use ideal gas molecules. It is the only way to maintain high uncertainty and freedom from low informational physicochemical determinism. Only then is the control and regulation so desperately needed for organization and life-origin possible. The higher the combinatorial possibilities and epistemological uncertainty of any physical medium, the greater is the information recordation potential of that matrix.

    Constraints and law-like behavior only reduce uncertainty (bit content) of any physical matrix. Any self-ordering tendency precludes the freedom from law needed to program logic gates and configurable switch settings. The regulation of life requires not only true decision nodes, but wise choices at each decision node. This is exactly what Maxwell’s Demon does. No yet-to-be discovered physicodynamic law will ever be able to replace the Demon’s wise choices, or explain the exquisite linear digital PI programming and organization of life (Abel, 2009a; Abel & Trevors, 2007). Organization requires choice contingency rather than chance contingency or law (Abel, 2008b, 2009b, 2010a). This conclusion comes via deductive logical necessity and clear-cut category differences, not just from best-thus-far empiricism or induction/abduction.

  50. 50
    kairosfocus says:

    F/N: On demand, it has been clearly shown that 2LOT is rooted in statistical thermodynamics, that this connects to the atomic-molecular basis of matter, that this points to information and probability issues, and that, as a direct consequence, the origin of FSCO/I by blind needle-in-haystack search on chance and mechanical necessity is maximally implausible.

    Let me cite Mandl in Statistical Physics, from the Manchester series, my actual first text in Stat Mech, p. 32:

    It was the superb achievement of Boltzmann, in the 1870’s, to relate entropy, which is a macroscopic concept, to the molecular properties of a system. The basic idea is that the macroscopic specification of a system is very imperfect. A system in a given macroscopic state can still be in any one of an enormously large number of microscopic states: the coarse macroscopic description cannot distinguish between these. The microscopic state of a system changes all the time; for example, in a gas it changes due to collisions between molecules. But the number of microscopic states which correspond to macroscopic equilibrium is overwhelmingly large compared with all other microscopic states. Hence the probability of appreciable deviations from equilibrium occurring is utterly negligible.

    [Contrasting gas molecules filling a container spontaneously rushing to one half, vs free expansion to fill it if a partition were suddenly removed] it is utterly improbable that the gas, starting from a state of uniform density . . . should spontaneously change to a state where all the molecules are in one half (A) of the enclosure . . . Of course one can especially prepare the system to be in this state [by using a partition that forces the molecules to be in A and B to be empty] . . . On removing the partition (if we imagine that this can be done sufficiently quickly) . . . the gas will very rapidly expand to fill the whole available space . . . Thereafter fluctuations will be very small [for 10^20 molecules, typ. up to ~ 10^10]. One would have to wait for times enormously long compared with the age of the universe [~10^17 s is in view] for a fluctuation to occur which is large on the macroscopic scale and then it would last only a very small fraction of a second. Thus one may safely ignore such large fluctuations altogether [i.e., they are practically unobservable though abstractly possible]. [Mandl, F, Statistical Physics, (Chichester: John Wiley, 1971), p. 32.]

    Remember, that’s just for the simple spontaneous occurrence of order not shaped by imposed constraints. The same manifestly obtains, with even more force, for spontaneous functionally specific complex interactive organisation and associated information, FSCO/I for short.

    Nor will it do to dismiss my point with an epithet like: Hoyle’s Fallacy. (Not least, as a Nobel equivalent prize holding astrophysicist, Sir Fred Hoyle had to be expert in thermodynamics and linked subjects.)

    Long before we come to a jumbo jet formed by a tornado hitting a junkyard, we have forming just an instrument on its instrument panel. Or, forming a Penn International 50 lb class reel. And going back to micro scale, forming a functional 300 AA protein — a typical size — or a D/RNA string that codes for it would also tax the atomic-temporal resources of the observed cosmos.
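    For rough order-of-magnitude arithmetic on that 300 AA case (an added sketch; the cosmic-scale figures — roughly 10^80 atoms, roughly 10^17 s, and a deliberately generous 10^14 molecular-scale events per atom per second — are assumptions for illustration, not measurements):

    ```python
    from math import log10

    # Sequence space of a 300-residue protein over the 20 standard amino acids,
    # compared with a generous upper bound on molecular-scale events available
    # in the observed cosmos (assumed round figures, for illustration only).
    seq_space_log10 = 300 * log10(20)    # log10(20^300) ~ 390
    atoms_log10 = 80                     # ~10^80 atoms (assumed)
    time_log10 = 17                      # ~10^17 s (assumed)
    rate_log10 = 14                      # ~10^14 events per atom per second (assumed, generous)

    events_log10 = atoms_log10 + time_log10 + rate_log10
    print(f"log10(sequence space)    ~ {seq_space_log10:.0f}")
    print(f"log10(events available)  ~ {events_log10}")
    print(f"shortfall, in orders of magnitude: ~ {seq_space_log10 - events_log10:.0f}")
    ```

    Even on these generous assumptions, the events available undersample the sequence space by hundreds of orders of magnitude — the needle-in-haystack point.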

    Such things can patently be formed without doing violence to 2LOT, but the only empirically and analytically warranted way is for there to be energy, mass and information flows that are relevant by virtue of being closely coupled to energy converters that drive constructors that use prescriptive information to create the FSCO/I-rich entities. In so doing, they will degrade and dissipate energy, and usually will exhaust waste matter also.

    Jumbo jets and fishing reels have factories and proteins have ribosomes etc.

    Mother Nature is trying to tell us something — FSCO/I is a strong sign of design as adequate cause, but are we listening?

    KF

    PS: That brings us back to Abel’s point highlighted yesterday and quoted at length at 49 above [from the book chapter, MOVING ‘FAR FROM EQUILIBRIUM’ IN A PREBIOTIC ENVIRONMENT: The role of Maxwell’s Demon in life origin, 2012]: mere order and self-ordering tendencies are poisonous to organization, and the regulation of life requires choice contingency rather than chance contingency or law.

    . . . and to Sewell’s longstanding point:

    . . . The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur.

    The discovery that life on Earth developed through evolutionary “steps,” coupled with the observation that mutations and natural selection — like other natural forces — can cause (minor) change, is widely accepted in the scientific world as proof that natural selection — alone among all natural forces — can create order out of disorder, and even design human brains, with human consciousness. Only the layman seems to see the problem with this logic. In a recent Mathematical Intelligencer article [“A Mathematician’s View of Evolution,” The Mathematical Intelligencer 22, number 4, 5-7, 2000] I asserted that the idea that the four fundamental forces of physics alone could rearrange the fundamental particles of Nature into spaceships, nuclear power plants, and computers, connected to laser printers, CRTs, keyboards and the Internet, appears to violate the second law of thermodynamics in a spectacular way.1 . . . .

    What happens in a[n isolated] system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. As I wrote in “Can ANYTHING Happen in an Open System?”, “order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door…. If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth’s atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here.” Evolution is a movie running backward, that is what makes it special.

    THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn’t, that atoms would rearrange themselves into spaceships and computers and TV sets . . . [NB: Emphases added. I have also substituted in isolated system terminology as GS uses a different terminology.]

    . . . but, are we listening?

  51.
    kairosfocus says:

    F/N 2: In case someone thinks the 500 or 1,000 coin example is irrelevant to Stat Mech, or not credible, simply put it in the context of a paramagnetic substance with atoms — or, for argument, tiny magnets — aligned parallel or anti-parallel to an imposed WEAK magnetic field; cf Mandl, Sec 2.2 pp 35 ff. That is, the weak field gives an axis of orientation and the system state is defined on alignment; “weak” being relative to typical thermal energy lumps of order kT, so thermal agitation can readily flip the atomic alignments . . . think of tiny loops of current here that can be ccw up or ccw down [= N up or N down]. The same binomial distribution will at once obtain, with the extrema showing total alignment [N up one way, S up the other] and the middle zone being the typical case of no apparent net magnetic moment. The analysis suggests that with some thermal agitation that can cause flipping, the natural thermodynamic equilibrium state — where the system strongly tends to settle once there is reasonable freedom for atoms to flip [see, too, how the idea of equiprobable individual microstates so naturally fits in with a dominant equilibrium cluster . . . ] — will be in the bulk cluster near 50-50, with fluctuations typically within 1 – 2 SD of the 50-50 mean, where sd = sqrt(n*p*q) with p = q = 0.5; so sd is ~ 11 for 500 units and ~ 16 for 1,000. Notice how the percentage of n falls: 2.2% –> 1.6%. For a more typical 10^20 atoms, fluctuations are of order 10^10 atoms, a vanishingly small percentage; with N and S up also overwhelmingly in no particular informed order, of course. And of course, N/S up is directly applicable to information storage too, though for practical cases we are speaking of domains, not single atoms. KF
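    P.S.: For anyone who wants to check the figures just given, here is a minimal Python sketch (purely an illustration of the arithmetic above, not anything taken from Mandl):

```python
from math import sqrt

def fluctuation_scale(n, p=0.5):
    """Standard deviation of the number of heads (or N-up atoms) for n
    independent two-state units, each 'up' with probability p."""
    q = 1.0 - p
    sd = sqrt(n * p * q)
    return sd, sd / n  # absolute SD, and SD as a fraction of n

for n in (500, 1000, 10**20):
    sd, frac = fluctuation_scale(n)
    print(f"n = {n:g}: sd ~ {sd:.3g} units, i.e. {100 * frac:.2g}% of n")
```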

  52.
    Mung says:

    Mung: Chance is not a cause.

    Salvador: Mung minces words.

    Not really. Mung is just better read.

    Not A Chance

  53.
    Mung says:

    Salvador:

    If you came across a table on which was set 500 fair coins and 100% displayed the “heads” side of the coin, how would you, using 2LOT, test “chance” as a hypothesis to explain this particular configuration of coins?

    Should I just take it on faith that they are 500 “fair” coins?

    You’re so invested in your nonsensical objections to ID that you can’t even see the idiocy of your position.

    How shall we test the proposition that each of these 500 coins is a “fair” coin?

  54.
    Mung says:

    Salvador:

    Pardon, you give a 4,458 word OP and yet are unwilling to state the actual version of the 2nd Law you are working from.

    LoL! What more could Salvador say to express just how lost he is?

  55.
    Mung says:

    Let’s continue with Henry B. Callen:

    Postulate I. There exist particular states (called equilibrium states) of simple systems that, macroscopically, are characterized completely by the internal energy U, the volume V, and the mole numbers N1, N2,…,Nr of the chemical components.

    The question for Salvador is simple.

    Which formulation (Sal: version) of the second law does this postulate violate?

    Are there equilibrium states of thermodynamic systems that this postulate fails to recognize?

  56.
    Mung says:

    Since we’re on a roll:

    Postulate II. There exists a function (called the entropy S) of the extensive parameters of any composite system, defined for all equilibrium states and having the following property: The values assumed by the extensive parameters in the absence of an internal constraint are those that maximize the entropy over the manifold of constrained equilibrium states.

    – H.B. Callen. Thermodynamics

    Note the failure to mention any specific “version” of the second law. Is it therefore deficient in some way?

    Not applicable to thermodynamics? How so?

    Folks, this is from 1960. We’re sharing what has been known for decades.

  57.
    Mung says:

    Further:

    Postulate III. The entropy of a composite system is additive over the constituent subsystems. The entropy is continuous and differentiable and is a monotonically increasing function of the energy.

    Shades of Shannon/Weaver.

  58.
    Mung says:

    kairosfocus:

    PS: I again note that in L K Nash’s classical introduction to statistical thermodynamics, the example of a 500 or 1,000 or so coin system is used as the key introductory example.

    “Statistical analysis shows that the emergence of a predominant configuration is characteristic of any assembly with a large number (N) of units, and that the peaking already observable with a 10-5 assembly becomes ever more pronounced as N increases. We can easily bring out the essence of this analysis by considering the results obtained by repeatedly tossing a well-balanced coin, which can fall in only two ways: head (H) or tails (T).”

    – p. 10

    Not even going to get into the exercise on p. 124.

  59.
    scordova says:

    Mung: Chance is not a cause

    Does that mean chance can’t be an explanation for something?

  60.
    kairosfocus says:

    SalC (attn Mung):

    Did you follow LKN’s actual analysis (not to mention the adaptation of Mandl’s paramagnetic model . . . he illustrated with 9 units, and referred to 10^20 – 10^23 as more realistic)?

    I am getting lazy, want to copy-paste not type out. But, here’s LKN, p. 26 2nd edn, on the pattern that I highlighted above from Yavorski-Pinski:

    As a mundane illustration, imagine a box in which 1,000 distinguishable coins [–> extend to the paramagnetic model if you do not like coins . . . ] all lie heads up. This perfectly ordered configuration can arise in but one way. Suppose that, after shaking the box slightly, we find its coins in the less orderly 900H/100T configuration, which can arise in some 10^140 ways. Once the box has been shaken, this configuration is favored by a margin of 10^140:1 over the initial perfectly ordered configuration. Suppose then that, after further shaking, we find the contents of the box in the still less ordered 700H/300T configuration, which can arise in some 10^265 different ways. The appearance of this configuration is favored by a margin of 10^125:1 over retention of the intermediate 900H/100T configuration, and by a margin of 10^265:1 over the reappearance of the initial completely ordered configuration. Suppose finally that the box is subjected to prolonged and very vigorous shaking. We are then likely to find the contents of the box in some close approximation to the completely random 500H/500T configuration, which can arise in some 10^300 different ways. And now still further shaking is unlikely to produce any significant departure from the wholly disordered PG [= predominant group] configurations in which W has at last assumed its maximum value. This purely pedestrian illustration [–> remember the paramagnetic substance model] thus shows how a progressive increase in “disorder” necessarily accompanies an approach to equilibrium [ –> moves to PG, stays there within several SD’s thereafter . . . a yardstick for fluctuations . . . as an overwhelming likelihood] characterized by the assumption of configurations with ever increasing values of W. And what may appear at first to be a purposeful “drive” towards states of maximal disorder, can now be seen to arise from the operation of blind chance in an assembly where all microstates remain equally probable [–> a postulate used in modelling reality and justified by success, but the effect will largely remain if there is simply a reasonably random distribution of probabilities, i.e. so long as there is not something that locks in a far from equilibrium condition], but where an overwhelming proportion of all microstates is associated with the maximally disordered [–> least specified or restricted] PG configurations.
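    (As a quick cross-check — my own illustrative sketch, not Nash’s — the multiplicities quoted can be recomputed directly from exact binomial coefficients; the shaking “margins” are then just differences of the exponents:)

```python
from math import comb, log10

# Number of ways W to get h heads out of 1,000 distinguishable coins,
# reported as a power of ten (1000H/0T gives W = 1, i.e. 10^0).
for h in (1000, 900, 700, 500):
    W = comb(1000, h)
    print(f"{h}H/{1000 - h}T: W ~ 10^{log10(W):.0f}")
```

    This reproduces the ~10^140, ~10^264–265 and ~10^299–300 orders of magnitude cited.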

    Of course, LKN uses the order/disorder contrast that is typically deprecated by current personalities, but his point in context is reasonable and readily understood. And if one does not like a discussion of what are in effect microscopic coins, a paramagnetic substance of 1,000 atoms will do . . . maybe a cryogenic array.

    His point comes out clearly, drawing out the tendency, once nothing blocks it, for configurations at micro level to strongly trend towards a cluster of microstates close to the peak of the binomial distribution; maximising W in S = k ln W. This draws out the same trend that is then exhibited as underlying the 2LOT, and as the appendix to the OP shows, we can derive the numerical form from it.

    In short, the macro-level 2LOT rests on an underlying microstates view, and on interactions and changes driven by implications of molecular level randomness. As a further consequence, we see how interactions between subsystems of an overall system (which may itself be isolated) will be such that transfer of heat from A to B yields a net overall increase in the number of ways mass and energy may be distributed at micro-level, as the number of ways added for B plausibly exceeds the number lost by A on transfer of an increment d’Q.
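    In classical terms, that bookkeeping is simply dS = –d’Q/T_A + d’Q/T_B, which is positive whenever T_A > T_B. A minimal sketch (the temperatures are assumed, purely for illustration):

```python
def entropy_change_on_transfer(dQ, T_hot, T_cold):
    """Net entropy change when a heat increment dQ (joules) passes from a
    body at T_hot to a body at T_cold (temperatures in kelvin)."""
    return -dQ / T_hot + dQ / T_cold

print(entropy_change_on_transfer(1.0, 400.0, 300.0))  # positive: the spontaneous direction
print(entropy_change_on_transfer(1.0, 300.0, 400.0))  # negative: not spontaneous without compensation
```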

    The next point is, that the same micro, molecular scale picture now brings to bear the force of the last part of the Clausius statement . . . and recall, there are multiple statements that significantly overlap . . . you have repeatedly cited (but seem to have overlooked):

    Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.

    In short, “compensation” is observed to be characteristically relevant, rather than irrelevant.

    And more broadly, on the same underlying molecular picture, we can see why it is futile to expect the sort of huge “fluctuation” implied by alleged spontaneous appearance of FSCO/I, other than where there is a cluster of constraints as highlighted in the OP as updated BTW:

    . . . starting from the “typical” diffused condition [ –> say, in a Darwin’s warm pond], we readily see how a work to clump at random emerges, and a further work to configure in functionally specific ways.

    With implications for this component of entropy change. [–> Entropy being a state function, we may reasonably break it up into components, where also such components may not be freely inter-convertible, e.g. work to clump without regard to configuration, then further work to configure per wiring diagram]

    As well as for the direction of the clumping and assembly process to get the right parts together, organised in the right cluster of ways that are consistent with function.

    Thus, there are implications of prescriptive information that specifies the relevant wiring diagram. (Think, AutoCAD etc as a comparison.)

    Pulling back, we can see that to achieve such, the reasonable — and empirically warranted — expectation, is

    a: to find energy, mass and information sources and flows associated with

    b: energy converters that provide shaft work or controlled flows [I use a heat engine here but energy converters are more general than that], linked to

    [Figure: a heat engine partially converts heat into work]

    c: constructors that carry out the particular work, under control of

    d: relevant prescriptive information that explicitly or implicitly regulates assembly to match the wiring diagram requisites of function,

    . . . [u/d Apr 13] or, comparing and contrasting a Maxwell Demon model that imposes organisation by choice with use of mechanisms, courtesy Abel:

    . . . also with

    e: exhaust or dissipation otherwise of degraded energy [typically, but not only, as heat . . . ] and discarding of wastes. (Which last gives relevant compensation where dS cosmos rises. Here, we may note SalC’s own recent cite on that law from Clausius, at 570 in the previous thread that shows what “relevant” implies: Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.)

    This contrasts sharply with the tendency to appeal to irrelevant flows without couplings. But the empirical evidence favours the above, as well as the underlying analysis on clusters of microstates.

    All of which is why I tend to speak on the statistical molecular processes undergirding and intertwined inextricably with 2LOT for 100+ years now. As well as that the tendency outlined is in effect the statistical form of 2LOT, which is a trivial extension from it.

    There is no suspect novelty tracing to Niwrad and the undersigned.

    The 500 or 1,000 coin model directly connects; indeed, it is arguable that the same thing is happening with macro-level coins as with a paramagnetic substance, only with coins we can see the configurations directly.

    And, without relevant and properly coupled energy, mass and information flows tied to energy converters and constructors controlled by prescriptive information, the molecular dynamics undergirding 2LOT are not reasonably compatible with origin of FSCO/I rich structures.

    Which, is where I came in way back.

    However, I do not expect such reasoning to be directly persuasive to those who have shown willingness to burn down the house of reason itself to sustain their evolutionary materialist ideology or accommodation to it. But, as more and more people wake up to what is going on, such will feel and fear social pressure occasioned by exposure to the point that their scheme is unreasonable and untenable.

    That is the point when actual dialogue may begin.

    Meanwhile, the issue is to make our case clear and solid.

    Thermodynamics reasoning, information reasoning and probability reasoning — which converge and interact profoundly — all have a part to play.

    KF

  61.
    kairosfocus says:

    SalC & Mung:

    I discussed the nature of chance (in different senses) here some time back:

    http://www.uncommondescent.com.....efinition/

    I would suggest that there are generally relevant causal factors which include chance, in both senses of the uncontrollable/unpredictable and the inherently stochastic quantum processes.

    Clipping:

    Chance:

    TYPE I: the clash of uncorrelated trains of events such as is seen when a dropped fair die hits a table etc. and tumbles, settling to readings in the set {1, 2, . . . 6} in a pattern that is effectively flat random. In this sort of event, we often see manifestations of sensitive dependence on initial conditions, aka chaos, intersecting with uncontrolled or uncontrollable small variations yielding a result predictable in most cases only up to a statistical distribution, which need not be flat random.

    TYPE II: processes — especially quantum ones — that are evidently random, such as the quantum tunnelling that explains phenomena like alpha decay. This is used, for instance, in Zener noise sources that drive special counter circuits to give a random number source. Such sources are sometimes used in lotteries or the like, or presumably in making one-time message pads used in decoding.

    This is what I then went on to say:

    (So, we first envision nature acting by low contingency mechanical necessity such as with F = m*a . . . think a heavy unsupported object near the earth’s surface falling with initial acceleration g = 9.8 N/kg or so. That is the first default. Similarly, we see high contingency knocking out the first default — under similar starting conditions, there is a broad range of possible outcomes. If things are highly contingent in this sense, the second default is: CHANCE. That is only knocked out if an aspect of an object, situation, or process etc. exhibits, simultaneously: (i) high contingency, (ii) tight specificity of configuration relative to possible configurations of the same bits and pieces, (iii) high complexity or information carrying capacity, usually beyond 500 – 1,000 bits. And for more context you may go back to the same first post, on the design inference. And yes, that will now also link this for an all in one go explanation of chance, so there!)

    Okie, let us trust there is sufficient clarity for further discussion on the main point. Remember, whatever meanings you may wish to inject into “chance,” the above is more or less what design thinkers mean when we use it — and I daresay, it is more or less what most people (including most scientists) mean by chance in light of experience with dice-using games, flipped coins, shuffled cards, lotteries, molecular agitation, Brownian motion and the like. At least, when hair-splitting debate points are not being made. It would be appreciated if that common sense based usage by design thinkers is taken into reckoning.
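    To keep those criteria concrete, here is a minimal sketch of the per-aspect filter just described (my own illustrative rendering; the Boolean inputs and the default 500-bit threshold simply stand in for the verbal criteria (i)–(iii) above):

```python
def explain_aspect(high_contingency: bool,
                   functionally_specific: bool,
                   info_capacity_bits: float,
                   threshold_bits: float = 500.0) -> str:
    """Per-aspect explanatory filter, roughly sketched: default to necessity,
    then chance; infer design only when high contingency, tight specificity
    and complexity beyond the threshold occur together."""
    if not high_contingency:
        return "mechanical necessity (law)"
    if functionally_specific and info_capacity_bits >= threshold_bits:
        return "intelligently directed configuration (design)"
    return "chance"

# e.g. a 143-character ASCII string at 7 bits per character:
print(explain_aspect(True, True, 143 * 7))   # design
print(explain_aspect(True, False, 143 * 7))  # chance
```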

    I just note, that this is deeply embedded in the statistical mechanical picture of matter at molecular level.

    KF

    PS: That one sees chance processes at work does not preclude the possibility of influence or intervention by purposeful actors as relevant. The same tumbling dice can be loaded or set to a desired reading.

  62.
    Mung says:

    Salvador no longer wishes to discuss thermodynamics, or information, or the law of large numbers, or probability.

    Perhaps a wise choice.

  63.
    kairosfocus says:

    Mung:

    Including, elsewhere?

    This thread and its OP serve to show the 100+ year inextricable connexion between the statistical-molecular view and 2LOT, thence how the molecular-state view shows why FSCO/I on blind needle in haystack search is maximally implausible.

    And, information is involved, as the entropy of an entity can properly be viewed as a metric of the average missing information needed to specify its microstate on knowing only the macro one.

    Probabilities and fluctuations crop up, pointing to why there is a strong trend to equilibrium and why it is sticky, tending to persist once you get there.

    Bonus, the coins example thanks to Mandl can be directly translated into studying a toy paramagnetic system.

    KF

    PS: Y/day I had a major reminder of why I do not wish to have anything to do with the fever swamp, rage and hate fest, defamatory cyberstalker atheism advocates and their more genteel enablers. I wonder if such understand what they say about themselves when they carry on as they do when they think they can get away with it?

  64.
    Mung says:

    Wikipedia:

    In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.
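    For concreteness, the convergence described above can be seen in a few lines of simulation (an illustrative sketch only): the running average of fair coin flips settles toward the expected value of 0.5 as the number of trials grows.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def running_average(n_flips):
    """Average of n fair coin flips, scoring heads as 1 and tails as 0."""
    heads = sum(random.randint(0, 1) for _ in range(n_flips))
    return heads / n_flips

for n in (10, 1000, 100000):
    print(n, running_average(n))
# The averages drift toward the expected value, 0.5, as n grows.
```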

    So we’re talking basic probability here, or something a bit more advanced.

    What’s the expected value for an ID argument that relies on the LLN?

    http://en.wikipedia.org/wiki/Expected_value

    Oh sure. Just basic probability and all. Boringly obvious.

    Trial: The painting on the ceiling of the Sistine Chapel.

    Wikipedia:

    The Sistine Chapel (Latin: Sacellum Sixtinum; Italian: Cappella Sistina) is a chapel in the Apostolic Palace, the official residence of the Pope, in Vatican City. Originally known as the Cappella Magna, the chapel takes its name from Pope Sixtus IV, who restored it between 1477 and 1480. Since that time, the chapel has served as a place of both religious and functionary papal activity. Today it is the site of the Papal conclave, the process by which a new pope is selected. The fame of the Sistine Chapel lies mainly in the frescos that decorate the interior, and most particularly the Sistine Chapel ceiling and The Last Judgment by Michelangelo.

    How many times shall we repeat this trial?

    What is the expected value?

    Should we conclude that it’s not designed?

  65.
    Mung says:

    kf, I have long been an advocate of the expression of thermodynamics in information theory terms. This view is founded upon statistical mechanics. As you say, it has a long and respected history.

    Salvador retreats to individual formulations of the second law, but what on earth unifies these individual formulations, and why should ID theorists care if they fail to address “the Clausius formulation” of the 2LOT?

    An ID argument does not have to be formulated in long-outdated terms that have been subsumed under broader principles.

    I think we agree on this.

  66.
    kairosfocus says:

    Mung, for 100+ years, 2LOT has been inextricably rooted in molecular-statistical terms, and those terms can be tied very directly to metrics of the average missing information required to identify a particular microstate on knowing only the macro-observable, state-defining variables. The net result, as say L K Nash long since pointed out, is that if a system is initially in a far from “equilibrium” condition and only the spontaneous forces typically available are at work, it will strongly tend to the predominant cluster of states and will tend to stay there; i.e. we have identified what thermodynamic equilibrium is, how it arises and why it is stable.

    This will obviously hold so long as there is reasonable access for particles and energy to interact and undertake a spontaneous walk away from the initial state. It does not strictly require that every microstate is equiprobable, but that simplification makes the mathematics enormously more tractable. And, if coins are deemed unrealistic, Mandl has kindly given us an equivalent, a toy paramagnetic substance in a weak B field. But then, instantly, we can see that it is the same logic at work in both cases. Just, we can inspect coins directly by opening up the box, so to speak.

    Yes, the magnitudes of the entropy numbers involved for such an aspect are low relative to those in many heat flow cases. So what? Our interest is in the likelihood of finding deeply isolated islands of function on blind needle in haystack search. On having complexity of at least 500 – 1,000 bits, we find that FSCO/I is maximally implausible to be found on such blind search. Basic probability, info and thermodynamics converge on the same point.

    And, that brings us back to seeing, on the logic that grounds 2LOT and is inextricable from it, that we have a good analytical rationale for the empirical observation that FSCO/I is an empirically highly reliable sign of intelligently directed configuration as adequate cause. KF

    PS: The onlooker should be able to note from the studious silence of objectors who routinely appeal to irrelevant energy etc flows as “compensating” for origin of FSCO/I, that they do not have a serious response. The following version of the Clausius statement of 2LOT, is most highly and directly relevant:

    Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.

    As in, relevant to and involved in the same process. As also in: energy, mass and info flows, connected to energy converters that produce shaft work and/or ordered flows that with prescriptive info, couple to constructors that under cybernetic control produce FSCO/I based entities. The assembly of a fishing reel or a protein, alike, exemplify the pattern.

  67.
    Graham2 says:

    Jeeez, fishing reels and the shroud of Turin, all in the one thread. Im outa here.

  68.
    bornagain77 says:

    Of related note:

    Ian Juby recently interviewed Dr. Andrew McIntosh. Dr. Andrew McIntosh is an expert in Thermodynamics who explains the intractable problems for neo-Darwinism from thermodynamics in a very easy to understand manner for the lay person.

    Dr. Andrew McIntosh interview with Ian Juby – video
    https://www.youtube.com/watch?v=D2PZ23ufoIQ

  69.
    bornagain77 says:

    Graham2 I would say come back when you have a more open mind, but it appears your mind was so open in the past that you ended up ‘losing your mind’.

    “Hawking’s entire argument is built upon theism. He is, as Cornelius Van Til put it, like the child who must climb up onto his father’s lap in order to slap his face.
    Take that part about the “human mind” for example. Under atheism there is no such thing as a mind. There is no such thing as understanding and no such thing as truth. All (Stephen) Hawking is left with is a box, called a skull, which contains a bunch of molecules.
    Hawking needs God in order to deny Him.”
    – Cornelius Hunter –

    Photo – “of all the things I’ve lost, I think I miss my mind the most”
    http://3.bp.blogspot.com/-H-kj.....0/rob4.jpg

  70.
    DNA_Jock says:

    kf at 66.

    Great point: informational entropy and “heat” entropy are really, thanks to the statistical underpinnings of 2LOT, inextricably interconnected.
    So how much ice would I need to melt in order to account for the information content of the human genome?

    Also, you do realize, don’t you, that when Clausius writes

    Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.

    the radiation that the earth emits into space would count as a change “connected therewith”. The “coupling” could be any contingent relationship, it doesn’t have to be a piston rod or axle. Although those do feature in many popular examples, which may have caused some confusion…

  71.
    CLAVDIVS says:

    Mung:

    Salvador retreats to individual formulations of the second law, but what on earth unifies these individual formulations, and why should ID theorists care if they fail to address “the Clausius formulation” of the 2LOT?

    What unifies these individual formulations is that these are the only formulations that are proven as a law of nature, viz. all systems spontaneously increase their thermodynamic entropy unless they can export it to their surroundings. Thus, in statistical mechanics, the 2nd law is a proven law of nature only for microstates that are measured in terms of energy.

    It is not a proven law of nature that non-thermodynamic entropy, such as information entropy, coin tosses etc, always increases unless exported, because the microstates are not being measured in terms of energy.

  72.
    kairosfocus says:

    DNA-J: This was already discussed: the issue missed by those who appeal to the compensation argument — notice what Clausius observed, aptly — is always relevance. You would have to connect the melting ice to the construction of the genome, brains, embryo etc. through energy converters and constructors, acting at molecular level, the operative level in the living cell. And, those who want to sniff at fishing reels should notice that this is a demonstration with a familiar item of FSCO/I, which is then articulated to the cellular metabolism network and the protein synthesis part of it. KF

  73.
    kairosfocus says:

    Clavdivs,

    Entropy as discussed is a state function linked to molecular configurations of a system, per S = k log W etc.

    As a state function, the specificity of the state is relevant to W, and in this case particular molecular configurations are material as we are addressing function based on config, where the function is evident at a much coarser resolution than the molecular scale of nm, e.g. a bacterial flagellum or the protein synthesis system or other life form relevant FSCO/I rich functions.

    The challenge is, in a darwin pond or the like, to get to such FSCO/I rich configs, by harvesting the available thermal energy, which at this scale goes to random molecular motions.

    The answer is, that the search space challenge is such that the motion to equilibrium trend pushes the system spontaneously away from the sort of highly complex and specific configs required. And, that a search on the gamut of the solar system or observed cosmos is maximally unlikely to find such configs blindly, due to the island of function in a sea of non-functional configs problem; once structured description length exceeds 500 – 1,000 bits.

    This is directly connected to the analysis that grounds 2LOT on molecular statistics considerations, as can readily be seen above for those interested in a sound rather than a rhetorically convenient conclusion.

    Appealing to irrelevant energy flows that normally just go to increased random agitation in a context where the entire resources of the solar system or observed cosmos devoted to such a search would with all but absolute certainty fail to discover such, is a grossly inadequate answer.

    Especially when we have easy access to the only observationally warranted adequate cause of FSCO/I, at both macro and molecular levels, intelligently directed configuration.

    The combination of an empirical observation base of trillions of cases and the molecular underpinnings of 2LOT strongly indicates that such FSCO/I is a reliable signature of design. It is quite clear that the reason this is resisted is ideological, not logical or empirical.

    In fact, there simply are no credible cases of such complex wiring diagram functionality coming about in our observation by blind chance and mechanical necessity, whilst there are literally trillions of such by design.

    Indeed, to produce objecting comments, you further extended that base of observations, creating FSCO/I rich comments as symbol strings, by design.

    KF

  74.
    CLAVDIVS says:

    kairosfocus:

    This is directly connected to the analysis that grounds 2LOT on molecular statistics considerations, as can readily be seen above for those interested in a sound rather than a rhetorically convenient conclusion.

    ‘Rhetorically convenient’ my foot. We’re talking about what is, and what is not, a recognised law of nature.

    In fact, there simply are no credible cases of such complex wiring diagram functionality coming about in our observation by blind chance and mechanical necessity, whilst there are literally trillions of such by design.

    You have simply bypassed my point altogether. Here it is again: the second law is proven as a law of nature only in relation to the thermodynamics of systems i.e. microstates measured in terms of energy. There is no recognised, proven law of nature of “configuration entropy” or “wiring entropy” or “coin toss entropy”.

    Accordingly, you can only apply the recognised, proven second law of thermodynamics to systems you are measuring in terms of energy and thermodynamic variables like temperature, pressure, etc. If you are measuring configuration, wiring or coin tosses, you cannot apply the recognised second law of thermodynamics.

    That was scordova’s point, which I was pointing out again to Mung.

  75.
    kairosfocus says:

    Clavdivs,

    With all due respect, you are simply refusing to address the first facts of the molecular underpinnings of 2LOT, which are directly tied to molecular configurations, e.g. S = k log W. W being in effect the count of number of ways mass and energy can be arranged at micro levels consistent with a macro-observable state.

    Second, FYI, if you were to examine the thread above regarding L K Nash (a not inconsiderable name) and his use of 500 or 1000 coins as an example of configuration, you will find a direct translation to a paramagnetic substance in a weak B field, with two possible alignments of atoms N-up and N-down; based on an example presented by Mandl; again a not inconsiderable name.

    The pattern of the trend to an equilibrium cluster of arrangements near 50-50 distribution emerges, as does the applicability of the binomial distribution.

    Overall, thermodynamics, including 2LOT, is inextricably tied to this statistical-molecular perspective; it has been for over 100 years. (I suggest you take a look at the PS to the OP on this. 2LOT is directly connected to the strong trend of systems to move towards thermodynamic equilibrium reflective of relative weights of clusters of microstates. And, this is not a novelty.)

    That analysis then brings directly to bear, e.g., the case of functionally specific configs that carry, say, an ASCII-coded message in English of 143 characters, and the maximal implausibility of accessing same by blind chance and mechanical necessity: the 10^80 atoms of the observed cosmos, working at 10^14 trials per second for 10^17 seconds, can only test about 10^111 possibilities out of the 10^301, of which the vast majority will be near 50-50 in no particular order.

    In short, blind needle in haystack search follows exactly the pattern elucidated by using 1,000 coins.

    Too much stack, too few needles, far too little search capability to examine more than a tiny, effectively negligible fraction of configs.
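    The arithmetic behind that point is elementary; a short sketch (using the round figures just given, purely for illustration) makes the gap explicit:

```python
from math import log10

atoms = 1e80    # atoms in the observed cosmos (round figure)
rate = 1e14     # trials per second per atom (a fast chemical timescale)
time_s = 1e17   # roughly the age of the cosmos in seconds

trials = atoms * rate * time_s   # total blind trials available
config_space = 2 ** 1001         # 143 ASCII characters at 7 bits each

print(f"trials available  ~ 10^{log10(trials):.0f}")
print(f"configurations    ~ 10^{log10(config_space):.0f}")
print(f"fraction sampled  ~ 10^{log10(trials) - log10(config_space):.0f}")
```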

    The same point that has been underscored year after year but studiously dismissed or ignored by those whose system of thought demands repeated statistical miracles.

    And feeding in thermal energy to the system will make no difference to the balance of circumstances; the maximum just given is well beyond reasonable actual processing reach. Too much stack, too few needles.

    This is an illustration of why it is so readily observable that FSCO/I is only produced by intelligently directed configuration.

    And, you would do well to recall that ever since the case of 35 Xe atoms spelling out IBM coming on 30 years past, there has been a considerable demonstration of similar cases of intelligently directed configuration producing FSCO/I-rich patterns at that level.

    KF

  76.
    kairosfocus says:

    F/N: to highlight the issue of relevance of compensating flows, I have added to the OP a page shot from Clausius’ 1854 paper, as published in English. It should be noted that 2LOT was developed in connexion with steam engines and the like, which give context to the statement. Note, the first technically effective refrigerator was built by Perkins in 1834, and a patent was granted on the first practical design in 1856. Boltzmann’s first work was in the 1870’s and Gibbs published at the turn of C20. Einstein’s Annalen der Physik paper on Brownian motion, which pivots on viewing the particles of pollen etc. as partaking of the molecular motion, is 1905. KF

  77.
    kairosfocus says:

    F/N: wiki has some interesting articles on entropy in various manifestations, let us clip as summing up typical conventional wisdom and testifying against known general interest; pardon the mangling of neatly presented formulae:

    1: Configuration Entropy:

    >> In statistical mechanics, configuration entropy is the portion of a system’s entropy that is related to the position of its constituent particles [–> at ultramicroscopic level] rather than to their velocity or momentum. It is physically related to the number of ways of arranging all the particles of the system while maintaining some overall set of specified system properties, such as energy.

    It can be shown[1] that the variation of configuration entropy of thermodynamic systems (e.g., ideal gas, and other systems with a vast number of internal degrees of freedom) in thermodynamic processes is equivalent to the variation of the macroscopic entropy defined as dS = d’Q/T, where d’Q is the heat exchanged between the system and the surrounding media, and T is temperature. Therefore configuration entropy is the same as macroscopic entropy. [d’Q used to get it to print] . . . .

    The configurational entropy is related to the number of possible configurations by Boltzmann’s entropy formula

    S = k_B log W

    where kB is the Boltzmann constant and W is the number of possible configurations. In a more general formulation, if a system can be in states n with probabilities Pn, the configurational entropy of the system is given by

    S = – k_B [SUM over states n] P_n log P_n

    which in the perfect disorder limit (all Pn = 1/W) leads to Boltzmann’s formula, while in the opposite limit (one configuration with probability 1), the entropy vanishes. This formulation is analogous to that of Shannon’s information entropy.

    The mathematical field of combinatorics, and in particular the mathematics of combinations and permutations is highly important in the calculation of configurational entropy. In particular, this field of mathematics offers formalized approaches for calculating the number of ways of choosing or arranging discrete objects; in this case, atoms or molecules. However, it is important to note that the positions of molecules are not strictly speaking discrete above the quantum level. Thus a variety of approximations may be used in discretizing a system to allow for a purely combinatorial approach. Alternatively, integral methods may be used in some cases to work directly with continuous position functions . . . >>

    2: Conformational Entropy:

    >> Conformational entropy is the entropy associated with the number of conformations of a molecule. The concept is most commonly applied to biological macromolecules such as proteins and RNA, but may also be used for polysaccharides and other molecules. To calculate the conformational entropy, the possible conformations of the molecule may first be discretized into a finite number of states, usually characterized by unique combinations of certain structural parameters, each of which has been assigned an energy. In proteins, backbone dihedral angles and side chain rotamers are commonly used as parameters, and in RNA the base pairing pattern may be used. These characteristics are used to define the degrees of freedom (in the statistical mechanics sense of a possible “microstate”). The conformational entropy associated with a particular structure or state, such as an alpha-helix, a folded or an unfolded protein structure, is then dependent on the probability or the occupancy of that structure.

    The entropy of heterogeneous random coil or denatured proteins is significantly higher than that of the folded native state tertiary structure. In particular, the conformational entropy of the amino acid side chains in a protein is thought to be a major contributor to the energetic stabilization of the denatured state and thus a barrier to protein folding.[1] However, a recent study has shown that side-chain conformational entropy can stabilize native structures among alternative compact structures.[2] The conformational entropy of RNA and proteins can be estimated; for example, empirical methods to estimate the loss of conformational entropy in a particular side chain on incorporation into a folded protein can roughly predict the effects of particular point mutations in a protein. Side-chain conformational entropies can be defined as Boltzmann sampling over all possible rotameric states:[3]

    S = -R [SUM over i] p_i ln p_i

    where R is the gas constant and p_{i} is the probability of a residue being in rotamer i.[3]

    The limited conformational range of proline residues lowers the conformational entropy of the denatured state and thus increases the energy difference between the denatured and native states. A correlation has been observed between the thermostability of a protein and its proline residue content.[4] >>

    3: Entropy of mixing

    >>Assume that the molecules of two different substances are approximately the same size, and regard space as subdivided into a square lattice whose cells are the size of the molecules. (In fact, any lattice would do, including close packing.) This is a crystal-like conceptual model to identify the molecular centers of mass. If the two phases are liquids, there is no spatial uncertainty in each one individually. (This is, of course, an approximation. Liquids have a “free volume”. This is why they are (usually) less dense than solids.) Everywhere we look in component 1, there is a molecule present, and likewise for component 2. After the two different substances are intermingled (assuming they are miscible), the liquid is still dense with molecules, but now there is uncertainty about what kind of molecule is in which location. Of course, any idea of identifying molecules in given locations is a thought experiment, not something one could do, but the calculation of the uncertainty is well-defined.

    We can use Boltzmann’s equation for the entropy change as applied to the mixing process

    Delta S_{mix}= k_B ln Omega,

    where k_B is Boltzmann’s constant. We then calculate the number of ways Omega of arranging N_1 molecules of component 1 and N_2 molecules of component 2 on a lattice, where

    N = N_1 + N_2,

    is the total number of molecules, and therefore the number of lattice sites. Calculating the number of permutations of N objects, correcting for the fact that N_1 of them are identical to one another, and likewise for N_2,

    Omega = N!/(N_1! N_2!),

    After applying Stirling’s approximation for the factorial of a large integer m:

    ln m! = SUM_{k=1..m} ln k ~ Integral_1^m ln k dk = m ln m – m,

    the result is Delta S_{mix} = -k_B[N_1 *ln(N_1/N) + N_2 * ln(N_2/N)] = -k_B N*[x_1*ln x_1 + x_2*ln x_2],

    where we have introduced the mole fractions, which are also the probabilities of finding any particular component in a given lattice site.

    x_1 = N_1/N = p_1; and x_2 = N_2/N = p_2,

    Since the Boltzmann constant k_B = R/N_A, where N_A is Avogadro’s number, and the number of molecules N = n*N_A, we recover the thermodynamic expression for the mixing of two ideal gases,

    Delta S_{mix} = -nR[x_1*ln x_1 + x_2*ln x_2] . . . .

    The entropy of mixing is also proportional to the Shannon entropy or compositional uncertainty of information theory, which is defined without requiring Stirling’s approximation. Claude Shannon introduced this expression for use in information theory, but similar formulas can be found as far back as the work of Ludwig Boltzmann and J. Willard Gibbs. The Shannon uncertainty is completely unrelated to the Heisenberg uncertainty principle in quantum mechanics, and is defined by

    H = – SUM_i p_i * ln(p_i)

    To relate this quantity to the entropy of mixing, we consider that the summation is over the various chemical species, so that this is the uncertainty about which kind of molecule is in any one site. It must be multiplied by the number of sites N to get the uncertainty for the whole system. Since the probability p_i of finding species i in a given site equals the mole fraction x_i, we again obtain the entropy of mixing on multiplying by the Boltzmann constant k_B.

    Delta S_{mix} = -N k_B * SUM_i x_i ln x_i >>

    4: Entropy (information theory)

    >> The inspiration for adopting the word entropy in information theory came from the close resemblance between Shannon’s formula and very similar known formulae from statistical mechanics.

    In statistical thermodynamics the most general formula for the thermodynamic entropy S of a thermodynamic system is the Gibbs entropy,

    S = – k_B [sum] p_i * ln p_i,

    where kB is the Boltzmann constant, and pi is the probability of a microstate. The Gibbs entropy was defined by J. Willard Gibbs in 1878 after earlier work by Boltzmann (1872).[9]

    The Gibbs entropy translates over almost unchanged into the world of quantum physics to give the von Neumann entropy, introduced by John von Neumann in 1927,

    S = – k_B Tr(rho ln rho),

    where rho is the density matrix of the quantum mechanical system and Tr is the trace.

    At an everyday practical level the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the minuteness of Boltzmann’s constant kB indicates, the changes in S/kB for even tiny amounts of substances in chemical and physical processes represent amounts of entropy that are extremely large compared to anything in data compression or signal processing. Furthermore, in classical thermodynamics the entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy.

    At a multidisciplinary level, however, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamic entropy, as explained by statistical mechanics, should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states of the system that are consistent with the measurable values of its macroscopic variables, thus making any complete state description longer. (See article: maximum entropy thermodynamics). Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox). Landauer’s principle imposes a lower bound on the amount of heat a computer must generate to process a given amount of information, though modern computers are far less efficient . . . >>

    5: Entropy in thermodynamics and information theory

    >> Theoretical relationship

    Despite the foregoing, there is a difference between the two quantities. The information entropy H can be calculated for any probability distribution (if the “message” is taken to be that the event i which had probability pi occurred, out of the space of the events possible), while the thermodynamic entropy S refers to thermodynamic probabilities pi specifically. The difference is more theoretical than actual, however, because any probability distribution can be approximated arbitrarily closely by some thermodynamic system.

    Moreover, a direct connection can be made between the two. If the probabilities in question are the thermodynamic probabilities pi: the (reduced) Gibbs entropy sigma can then be seen as simply the amount of Shannon information needed to define the detailed microscopic state of the system, given its macroscopic description. Or, in the words of G. N. Lewis writing about chemical entropy in 1930, “Gain in entropy always means loss of information, and nothing more”. To be more concrete, in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes–no questions needed to be answered in order to fully specify the microstate, given that we know the macrostate.

    Furthermore, the prescription to find the equilibrium distributions of statistical mechanics—such as the Boltzmann distribution—by maximising the Gibbs entropy subject to appropriate constraints (the Gibbs algorithm) can be seen as something not unique to thermodynamics, but as a principle of general relevance in statistical inference, if it is desired to find a maximally uninformative probability distribution, subject to certain constraints on its averages. (These perspectives are explored further in the article Maximum entropy thermodynamics.)

    Information is physical

    A physical thought experiment demonstrating how just the possession of information might in principle have thermodynamic consequences was established in 1929 by Leó Szilárd, in a refinement of the famous Maxwell’s demon scenario.

    Consider Maxwell’s set-up, but with only a single gas particle in a box. If the supernatural demon knows which half of the box the particle is in (equivalent to a single bit of information), it can close a shutter between the two halves of the box, close a piston unopposed into the empty half of the box, and then extract k_B T * ln 2 joules of useful work if the shutter is opened again. The particle can then be left to isothermally expand back to its original equilibrium occupied volume. In just the right circumstances therefore, the possession of a single bit of Shannon information (a single bit of negentropy in Brillouin’s term) really does correspond to a reduction in the entropy of the physical system. The global entropy is not decreased, but information to energy conversion is possible.

    Using a phase-contrast microscope equipped with a high speed camera connected to a computer, as demon, the principle has been actually demonstrated.[2] In this experiment, information to energy conversion is performed on a Brownian particle by means of feedback control; that is, synchronizing the work given to the particle with the information obtained on its position. Computing energy balances for different feedback protocols, has confirmed that the Jarzynski equality requires a generalization that accounts for the amount of information involved in the feedback. >>

    In short, once molecular statistics, probability and information considerations are factored in, we begin to see deep connexions. Those connexions point to how the particular aspect of configuration, FSCO/I becomes significant. For, as can readily be shown, a blind, sol system or observed cosmos scope needle in haystack chance and necessity search of config space is maximally implausible to find deeply isolated islands of function. And, this is closely tied to the molecular underpinnings of 2LOT.
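    The formulas clipped above also lend themselves to quick numerical checks. Here is a short Python sketch (my own illustration, not from the cited articles) verifying that the Boltzmann and Gibbs forms coincide in the equiprobable limit, comparing the two-component mixing entropy by direct lattice counting and by the mole-fraction formula, and evaluating the k_B*T*ln 2 of work per bit in the Szilard/Landauer discussion:

```python
from math import comb, log

K_B = 1.380649e-23  # Boltzmann constant, J/K

# 1. Boltzmann vs Gibbs configurational entropy in the equiprobable limit
W = 10_000
s_boltzmann = K_B * log(W)                                       # S = k_B ln W
s_gibbs = -K_B * sum((1.0 / W) * log(1.0 / W) for _ in range(W))
print(s_boltzmann, s_gibbs)  # the two values coincide

# 2. Entropy of mixing: exact lattice count vs the mole-fraction formula
def mixing_entropy_exact(n1, n2):
    """Delta S_mix = k_B ln Omega, with Omega = N!/(N_1! N_2!)."""
    return K_B * log(comb(n1 + n2, n1))

def mixing_entropy_formula(n1, n2):
    """Delta S_mix = -k_B N (x_1 ln x_1 + x_2 ln x_2)."""
    n = n1 + n2
    x1, x2 = n1 / n, n2 / n
    return -K_B * n * (x1 * log(x1) + x2 * log(x2))

print(mixing_entropy_exact(600, 400), mixing_entropy_formula(600, 400))

# 3. Szilard / Landauer: work associated with one bit at about room temperature
print(K_B * 300 * log(2))  # ~ 2.9e-21 joules per bit
```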

    KF

  78.
    CLAVDIVS says:

    kairosfocus:

    None of that contradicts what I said to Mung. There is no rigorously proven law that ‘configuration entropy’ or ‘information entropy’ must spontaneously increase unless exported. The second law only applies to thermodynamic entropy, which is the point that unifies scordova’s formulations … namely, they all relate to thermodynamic systems only.

    I don’t disagree whatsoever that many non-thermodynamic systems tend spontaneously to their most probable arrangement of microstates. However, that tendency cannot be described as being due to the second law of thermodynamics, because they’re not thermodynamic systems.

  79.
    kairosfocus says:

    Clavdivs,

    I simply note for record, expecting that the below will be clear enough that the unbiased onlooker will readily see the balance on the merits.

    I no longer expect that committed evolutionary materialists and fellow travellers will yield to facts, evidence or argument. Absent, the sort of utter system collapse that overtook the similarly ideologically locked in Marxists at the turn of the 1990’s.

    So, for record.

    First, it has been adequately shown how the statistical-molecular picture shows how S becomes time’s arrow. (Cf. the PS to the OP for a derivation of 2LOT from it. For over 100 years, this law has been inextricable from that grounding.)

    And, the unmet challenge — unmet on trillions of examples observed — for FSCO/I to arise from blind needle in haystack search across config spaces is directly tied to the same considerations.

    As just one instance, the very same balance of statistical weights of clusters of microstates issue is what grounds why on release of a partition between A and B a gas will expand to fill A + B rapidly (increasing its entropy also . . . ) but we would have to wait many times the duration of the cosmos to date before we would see a momentary, spontaneous return of the molecules back into A alone, leaving B empty.

    But of course, it is trivially easy to push the molecules into A again by intelligently directed configuration. The reason is, that the numbers of states in which the gas is spread across A + B astronomically outweighs the number where (absent imposed constraint) the same molecules spontaneously occupy only A.
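    The statistical weights behind that illustration are easy to state: for N non-interacting molecules, each equally likely to be found in A or B, the fraction of microstates with every molecule in A is 2^–N, which collapses toward zero long before N reaches molecular scales. A minimal sketch (illustrative only):

```python
from math import log10

def log10_prob_all_in_A(n_molecules):
    """log10 of the chance that all n molecules sit in half A, treating each
    molecule as equally likely to be found in A or B."""
    return -n_molecules * log10(2)

for n in (10, 100, 6.022e23):  # the last case: about a mole of gas
    print(f"N = {n:g}: log10 P(all in A) ~ {log10_prob_all_in_A(n):.3g}")
```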

    And, that is only for ORDER, not functionally specific, complex, interactive organisation per wiring diagram and associated information, FSCO/I for short.

    Likewise, the coins example or its 1,000 atom paramagnetic substance in a weak magnetic field micro-analogue shows us through an easily analysed example how a predominant group cluster emerges and “sticks.” It is also directly relevant to descriptive strings that are informationally equivalent to wiring diagram organised entities. This case is therefore WLOG.

    The predominant cluster readily emerges as the near 50-50 band, with a small allowance for fluctuations of a few SD; but the SD will be quite small, sqrt(n*p*q).

    In addition, though many coded strings that map to functional wiring diagrams will be in that range, they will be drastically overwhelmed by the utterly dominant near 50-50 in no particular meaningful or functional pattern. That is, we again see the familiar pattern, deeply isolated islands of function — wiring diagram patterns sharply constrain clusters of functional configs — in vast config spaces not thoroughly searchable on atomic-temporal resources of the sol system or observed cosmos. Where, the paramagnetic substance case is beyond any reasonable dispute a thermodynamic system. One that is directly comparable to a tray of 1,000 coins. So, the analysis for the one applies to the other, the difference being we readily see coins.

    The molecular-statistical stat thermo-D analysis is thermodynamic and is inextricably tied to the macro level classical forms. Where the key analysis is fundamentally the same as that which grounds 2LOT on relative statistical weights and tendency to gravitate to predominant group.

    As has been so often pointed out but studiously dismissed, ignored or brushed aside by the deeply indoctrinated.

    The matter is so plain that the basic challenge is, demonstrate spontaneous origin of FSCO/I per observation on blind chance and/or mechanical necessity as a reasonable case; or else acknowledge the overwhelming fact on trillions of cases, that FSCO/I has just one routinely observed adequate cause: intelligently directed configuration.

    AKA, design.

    As in, vera causa.

    KF

    PS: Prediction, onlookers: advocates of evolutionary materialism and/or fellow travellers will continue to fail to demonstrate such causal adequacy of blind chance and mechanical necessity, but will insist on locking out the demonstrated adequate cause for FSCO/I in explaining origins.

  80.
    CLAVDIVS says:

    kairosfocus:

    I cannot tell from your latest whether you agree that the second law of thermodynamics only applies to thermodynamic macrostates and microstates.

    Do you?

  81.
    kairosfocus says:

    Clavdivs, 2LOT is a direct consequence of the microstate analysis of molecular statistics, which is more fundamental. The case with paramagnetism will show that heat is not the only relevant context, and indeed heat flow into a system basically means that there is more energy spreading randomly to explore the space of possibilities for mass and energy to be arranged at micro level. From the beginning of this exchange I have stressed that micro analysis and its range of implications. One of these happens to bear the label 2LOT. Another, is the basis for a point Clausius made in 1854, translated into English 1856, i.e. the hoped for compensations associated with reduction of entropy to form FSCO/I rich clusters, must be relevantly connected causally and temporally. The common dismissal by appeal to irrelevant flows, fails. KF

    PS: I am led to wonder to what extent there is familiarity with that micro view.

  82.
    Mung says:

    That a law of thermodynamics should be applicable to a thermodynamic system is hardly news.

    … the second law of thermodynamics only applies to thermodynamic macrostates and microstates.

    Missing the point, entirely.

  83.
    Mung says:

    Zeroth law of thermodynamics: If two systems are in thermal equilibrium respectively with a third system, they must be in thermal equilibrium with each other. This law helps define the notion of temperature.

    If a = b, and b = c, then a = c. Irrespective of any laws of thermodynamics. We don’t need a law of thermodynamics to tell us so.
