
Should ID supporters argue in terms of thermodynamics or information or [“basic . . . ”] probability?


In the still-active discussion thread on the failure of compensation arguments, long-term maverick ID (and, I think, still YEC-sympathetic) supporter SalC comments:

SalC, 570:    . . .  I’ve argued against using information theory type arguments in defense of ID, it adds way too much confusion. Basic probability will do the job, and basic probability is clear and unassailable.

The multiplicities of interest to ID proponents don’t vary with temperature, whereas the multiplicities from a thermodynamic perspective change with temperature. I find that very problematic for invoking 2LOT in defense of ID.

Algorithmically controlled metabolisms (such as realized in life) are low multiplicity constructs as a matter of principle. They are high in information content. But why add more jargon and terminology?

Most people understand “complex-computer-like machines such as living creatures are far from the expected outcome of random processes”. This is a subtle assertion of LLN [ –> The Law of Large Numbers in Statistics]. This is a different way of posing the Humpty Dumpty problem.

There are an infinite number of ways to make lock-and-key or login/password systems, but just because there are an infinite number of ways to do this does not make them highly probable from random processes . . . . Why invoke 2LOT? Why invoke fancy terms that add confusion? One deliberately uses unclear arguments when one wishes to obfuscate, but why be unclear when evidence is on one’s side? . . . .

LLN and variations of Jonathan Wells’ Humpty Dumpty argument I think have proven themselves in the arena of debate. Maybe even more tersely, extrapolating Pasteur, “life does not come from non-life.”

In response: while SalC has a point when a basic and non-controversial exchange is in hand, we usually do not have that luxury and must address the technical issues to some significant extent. When we do so, all of the above factors come out at once as differing facets of the same issue.

Facets that inextricably interact as parts of a whole.

Analogous, say, to the hearts and arrows effect seen in suitably, very specifically cut and polished diamonds:

[Image: hearts_arrows_viewer]

In short, once we begin to probe beneath the surface, design thought shows itself to be . . . irreducibly complex.

Accordingly, I responded as follows at 592, which I think I should headline and augment:

_______________

KF, 592: >>Pardon, but I have a different take: based on years of observing, any serious design argument will be twisted into pretzels, strawmannised, confused, clouded and generally distorted and dismissed by the sort of ruthlessly determined and too often amoral or outright nihilistic, truth- and fairness-disregarding objectors we frequently face.

This is because, too many such are “any means necessary”/”ends justify means” committed ideologues full of agit-prop talking points and agendas.

That’s exactly how the trained, indoctrinated Marxist agitators of my youth operated. Benumbed in conscience, insensitive to truth, fundamentally rage-blinded [even when charming], secure in their notion that they were the vanguard of the future/progress, and that they were championing pure victims of the oppressor classes who deserved anything they got.

(Just to illustrate the attitude, I remember one who falsely accused me of theft of an item of equipment kept in my lab. Once I understood the situation, I promptly had it signed over to the Student Union, then went to her office and confronted her with the sign-off. “How can you be so thin-skinned?” was her only response; taking full advantage of the rule that men must restrain themselves in dealing with women, however outrageous the latter, and of course seeking to wound further. Ironically, this champion of the working classes was from a much higher class-origin than I was . . . actually, unsurprisingly. To see the parallels, notice how often not only objectors who come here but also the major materialist agit-prop organisations — without good grounds — insinuate on our part calculated dishonesty and utter incompetence, to the point that we should not have been able to complete a degree.)

I suggest, first, that the pivot of design discussions on the world of life is functionally specific, complex interactive Wicken wiring-diagram organisation of parts that achieve a whole performance based on particular arrangement and coupling, and the associated information. Information that is sometimes explicit (R/DNA codes) and sometimes may be drawn out by using structured Y/N questions that describe the wiring pattern needed to achieve function.

FSCO/I, for short.

{Aug. 1:} Back to reels to show the basic “please face and acknowledge facts” reality of FSCO/I; here is the Penn International trolling reel exploded view:

[Image: Penn_intl_50_expl_vw]

. . . and a video showing the implications of this “wiring diagram” for how it is put together in the factory:

[youtube TTqzSHZKQ1k]

. . . just, remember, the arm-hand system is a complex, multi-axis cybernetic manipulator-arm:

[Image: ArmModelLabel]

This concept is not new; it goes back to Orgel, 1973:

. . . In brief, living organisms [–> functional context] are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . .

[HT, Mung, fr. p. 190 & 196:] These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [–> this is of course equivalent to the string of yes/no questions required to specify the relevant “wiring diagram” for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002 . . . ] One can see intuitively that many instructions are needed to specify a complex structure. [–> so if the q’s to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions. [–> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes. [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196.]

. . . as well as Wicken, 1979:

‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]

. . . and is pretty directly stated by Dembski in NFL:

p. 148:“The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.

I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .

Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. . . . In virtue of their function [[a living organism’s subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole. Dembski cites:

Wouters, p. 148: “globally in terms of the viability of whole organisms,”

Behe, p. 148: “minimal function of biochemical systems,”

Dawkins, pp. 148 – 9: “Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by ran-| dom chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction.”

On p. 149, he roughly cites Orgel’s famous remark from 1973, which, exactly cited, reads:

In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . .

And, p. 149, he highlights Paul Davies in The Fifth Miracle: “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity.”] . . .”

p. 144: [[Specified complexity can be more formally defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”
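
As a quick arithmetic cross-check of the correspondence just cited, here is a minimal Python sketch (it only restates I = -log2(p); nothing beyond the figures quoted above is assumed):

import math

# Dembski's universal probability bound of 1 chance in 10^150, expressed in bits:
p_bound = 10.0 ** -150
print(round(-math.log2(p_bound), 1))   # ~498.3 bits, i.e. roughly 500 bits

# Equivalently, a 500-bit complexity bound marks out 2^500 possible configurations:
print(f"{float(2 ** 500):.2e}")        # ~3.27e+150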

What happens at the relevant cellular level is that this comes down to highly endothermic, C-chemistry, aqueous-medium macromolecules in complexes that are organised to achieve the highly integrated and specific interlocking functions required for metabolising, self-replicating cells to work.

[Image: self_replication_mignea]

This implicates huge quantities of information, manifest in the highly specific functional organisation; which is observable at a much coarser resolution than the nm range of basic molecular interactions. That is, we see tightly constrained clusters of micro-level arrangements — states — consistent with function, as opposed to the much larger numbers of possible but overwhelmingly non-functional ways the same atoms and monomer components could be chemically and/or physically clumped “at random.” In turn, even that clumped-at-random set is far smaller than the number of ways the same components could be scattered across a Darwin’s pond or the like.
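
To get a toy-scale feel for that nesting of state counts, here is a small illustrative Python sketch (the 1,000 sites and 10 distinct parts are arbitrary stand-ins, not a biochemical model): scattered arrangements vastly outnumber merely clumped ones, which in turn vastly outnumber a single functional configuration, and S = k ln W falls at each step.

import math

k_B = 1.380649e-23      # J/K, Boltzmann constant
sites = 1000            # assumed number of available locations in the "pond"
parts = 10              # ten distinct parts to be placed

# Scattered: ordered placement of 10 distinct parts anywhere over 1000 sites
W_scattered = math.perm(sites, parts)
# Clumped: the same 10 parts confined to one specific 10-site patch, in any order
W_clumped = math.factorial(parts)
# Functionally configured: one specific "wiring diagram" arrangement
W_functional = 1

for name, W in [("scattered", W_scattered), ("clumped", W_clumped), ("functional", W_functional)]:
    print(f"{name:10s} W = {float(W):.3e}   S = k ln W = {k_B * math.log(W):.3e} J/K")

# Entropy drops on clumping and drops again on configuring, which is the
# "work of clumping" plus "work of configuring" picture developed below.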

{Aug. 2} For illustration let us consider the protein synthesis process at gross level:

[Image: Proteinsynthesis]

. . . spotlighting and comparing the ribosome in action as a coded tape Numerically Controlled machine:

[Image: fscoi_facts]

. . . then at a little more zoomed in level:

[Image: Protein Synthesis (HT: Wiki Media)]

. . . then in the wider context of cellular metabolism [protein synthesis is the little bit with two call-outs in the top left of the infographic]:

[Image: cell_metabolism]

Thus, starting from the “typical” diffused condition, we readily see how a work of clumping at random emerges, and a further work of configuring in functionally specific ways.

With implications for this component of entropy change.

As well as for the direction of the clumping and assembly process to get the right parts together, organised in the right cluster of ways that are consistent with function.

Thus, there are implications of prescriptive information that specifies the relevant wiring diagram. (Think of AutoCAD etc. as a comparison.)
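
As a rough numerical illustration of how such a wiring diagram cashes out in bits, here is a short Python sketch of the structured Y/N questions idea (the 100-unit chain and 20-letter alphabet are convenient stand-ins, not a measurement of any particular molecule):

import math

# Specifying one particular configuration out of W equally available ones takes
# about log2(W) structured yes/no answers, i.e. log2(W) bits.
W_sequence = 20 ** 100                        # any one specific 100-mer over a 20-letter alphabet
print(round(math.log2(W_sequence), 1))        # ~432.2 bits

# A simple repeating structure, by contrast, needs only a short prescription:
# roughly "the repeat unit" plus "how many times to repeat it".
repeat_bits = math.log2(20) + math.log2(100)  # choose one unit, then a count up to 100
print(round(repeat_bits, 1))                  # ~11.0 bits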

Pulling back, we can see that to achieve such, the reasonable — and empirically warranted — expectation is

a: to find energy, mass and information sources and flows associated with

b: energy converters that provide shaft work or controlled flows [I use a heat engine here but energy converters are more general than that], linked to

[Image: A heat engine partially converts heat into work]

c: constructors that carry out the particular work, under control of

d: relevant prescriptive information that explicitly or implicitly regulates assembly to match the wiring diagram requisites of function,

[Image: A von Neumann kinematic self-replicator]

. . . [u/d Apr 13] or, comparing and contrasting a Maxwell Demon model that imposes organisation by choice with use of mechanisms, courtesy Abel:

[Image: max_vs_spontFFEq]

. . . also with

e: exhaust or dissipation otherwise of degraded energy [typically, but not only, as heat . . . ] and discarding of wastes. (Which last gives relevant compensation where dS cosmos rises. Here, we may note SalC’s own recent cite on that law from Clausius, at 570 in the previous thread that shows what “relevant” implies: Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.)

{Added, April 19, 2015: Clausius’ statement:}

[Image: Clausius_1854]

By contrast with such, there seems to be a strong belief that irrelevant mass and/or energy flows, without coupled converters, constructors and prescriptive organising information, can somehow, through phenomena such as diffusion and fluctuations, credibly hit on a replicating entity that then ratchets up into a fully encapsulated, gated, metabolising, algorithmic-code-using, self-replicating cell.

Such is thermodynamically (yes, thermodynamically), informationally and probabilistically [loose sense] utterly implausible. And, the sort of implied genes-first/RNA-world, or alternatively metabolism-first, scenarios that have been suggested are without foundation in empirically observed adequate causes tracing only to blind chance and mechanical necessity.

{U/D, Apr 13:} Abel 2012 makes much the same point, in his book chapter, MOVING ‘FAR FROM EQUILIBRIUM’ IN A PREBIOTIC ENVIRONMENT: The role of Maxwell’s Demon in life origin:

Mere heterogeneity and/or order do not even begin to satisfy the necessary and sufficient conditions for life. Self-ordering tendencies provide no mechanism for self-organization, let alone abiogenesis. All sorts of physical astronomical “clumping,” weak-bonded molecular alignments, phase changes, and outright chemical reactions occur spontaneously in nature that have nothing to do with life. Life is organization-based, not order-based. As we shall see below in Section 6, order is poisonous to organization.

Stochastic ensembles of nucleotides and amino acids can polymerize naturalistically (with great difficulty). But functional sequencing of those monomers cannot be determined by any fixed physicodynamic law. It is well-known that only one 150-mer polyamino acid string out of 10^74 stochastic ensembles folds into a tertiary structure with any hint of protein function (Axe, 2004). This takes into full consideration the much publicized substitutability of amino acids without loss of function within a typical protein family membership. The odds are still only one functional protein out of 10^74 stochastic ensembles. And 150 residues are of minimal length to qualify for protein status. Worse yet, spontaneously condensed Levo-only peptides with peptide-only bonds between only biologically useful amino acids in a prebiotic environment would rarely exceed a dozen mers in length. Without polycodon prescription and sophisticated ribosome machinery, not even polypeptides form that would contribute much to “useful biological work.” . . . .

There are other reasons why merely “moving far from equilibrium” is not the key to life as seems so universally supposed. Disequilibrium stemming from mere physicodynamic constraints and self-ordering phenomena would actually be poisonous to life-origin (Abel, 2009b). The price of such constrained and self-ordering tendencies in nature is the severe reduction of Shannon informational uncertainty in any physical medium (Abel, 2008b, 2010a). Self-ordering processes preclude information generation because they force conformity and reduce freedom of selection. If information needs anything, it is the uncertainty made possible by freedom from determinism at true decisions nodes and logic gates. Configurable switch-settings must be physicodynamically inert (Rocha, 2001; Rocha & Hordijk, 2005) for genetic programming and evolution of the symbol system to take place (Pattee, 1995a, 1995b). This is the main reason that Maxwell’s Demon model must use ideal gas molecules. It is the only way to maintain high uncertainty and freedom from low informational physicochemical determinism. Only then is the control and regulation so desperately needed for organization and life-origin possible. The higher the combinatorial possibilities and epistemological uncertainty of any physical medium, the greater is the information recordation potential of that matrix.

Constraints and law-like behavior only reduce uncertainty (bit content) of any physical matrix. Any self-ordering tendency precludes the freedom from law needed to program logic gates and configurable switch settings. The regulation of life requires not only true decision nodes, but wise choices at each decision node. This is exactly what Maxwell’s Demon does. No yet-to-be discovered physicodynamic law will ever be able to replace the Demon’s wise choices, or explain the exquisite linear digital PI programming and organization of life (Abel, 2009a; Abel & Trevors, 2007). Organization requires choice contingency rather than chance contingency or law (Abel, 2008b, 2009b, 2010a). This conclusion comes via deductive logical necessity and clear-cut category differences, not just from best-thus-far empiricism or induction/abduction.

In short, the three perspectives converge. Thermodynamically, the implausibility of finding information rich FSCO/I in islands of function in vast config spaces . . .

[Image: csi_defn]

. . . — where we can picture the search by using coins as a stand-in for one-bit registers —

[Image: sol_coin_flipr]

. . . links directly to the overwhelmingly likely outcome of spontaneous processes. Such is of course a probabilistically linked outcome. And, information is often quantified on the same probability thinking.

Taking a step back to App A of my always-linked note, following Thaxton, Bradley and Olsen in TMLO, 1984, and amplifying a bit:

. . . Going forward to the discussion in Ch 8, in light of the definition dG = dH – TdS, we may then split up the TdS term into contributing components, thusly:

First, dG = [dE + PdV] – TdS . . . [Eqn A.9, cf def’ns for G, H above]

But, [1] since pressure-volume work [–> the PdV term] may be seen as negligible in the context we have in mind, and [2] since we may look at dE as shifts in bonding energy [which will be more or less the same in DNA or polypeptide/protein chains of the same length regardless of the sequence of the monomers], we may focus on the TdS term. This brings us back to the clumping then configuring sequence of changes in entropy in the Micro-Jets example above:

dG = dH – T[dS_clump + dS_config] . . . [Eqn A.10, cf. TBO 8.5]

Of course, we have already addressed the reduction in entropy on clumping and the further reduction in entropy on configuration, through the thought expt. etc., above. In the DNA or protein formation case, more or less the same thing happens. Using Brillouin’s negentropy formulation of information, we may see that the dS_config is the negative of the information content of the molecule.

A bit of back-tracking will help:

S = k ln W . . . Eqn A.3

{U/D Apr 19: Boltzmann’s tombstone}

[Image: Boltzmann_equation]

Now, W may be seen as a composite of the ways energy as well as mass may be arranged at micro-level. That is, we are marking a distinction between the entropy component due to ways energy [here usually, thermal energy] may be arranged, and that due to the ways mass may be configured across the relevant volume. The configurational component arises from in effect the same considerations as lead us to see a rise in entropy on having a body of gas at first confined to part of an apparatus, then allowing it to freely expand into the full volume:

Free expansion:

|| * * * * * * * * | . . . . .  ||

Then:

|| * * * * * * * * ||

Or, as Prof. Gary L. Bertrand of the University of Missouri-Rolla summarises:

The freedom within a part of the universe may take two major forms: the freedom of the mass and the freedom of the energy. The amount of freedom is related to the number of different ways the mass or the energy in that part of the universe may be arranged while not gaining or losing any mass or energy. We will concentrate on a specific part of the universe, perhaps within a closed container. If the mass within the container is distributed into a lot of tiny little balls (atoms) flying blindly about, running into each other and anything else (like walls) that may be in their way, there is a huge number of different ways the atoms could be arranged at any one time. Each atom could at different times occupy any place within the container that was not already occupied by another atom, but on average the atoms will be uniformly distributed throughout the container. If we can mathematically estimate the number of different ways the atoms may be arranged, we can quantify the freedom of the mass. If somehow we increase the size of the container, each atom can move around in a greater amount of space, and the number of ways the mass may be arranged will increase . . . .

The thermodynamic term for quantifying freedom is entropy, and it is given the symbol S. Like freedom, the entropy of a system increases with the temperature and with volume . . . the entropy of a system increases as the concentrations of the components decrease. The part of entropy which is determined by energetic freedom is called thermal entropy, and the part that is determined by concentration is called configurational entropy.

In short, degree of confinement in space constrains the degree of disorder/”freedom” that masses may have. And, of course, confinement to particular portions of a linear polymer is no less a case of volumetric confinement (relative to being free to take up any location at random along the chain of monomers) than is confinement of gas molecules to one part of an apparatus. And, degree of such confinement may appropriately be termed, degree of “concentration.”

Diffusion is a similar case: infusing a drop of dye into a glass of water — the particles spread out across the volume and we see an increase of entropy there. (The micro-jets case of course is effectively diffusion in reverse, so we see the reduction in entropy on clumping and then also the further reduction in entropy on configuring to form a flyable microjet.)
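
For a concrete, worked feel of this configurational component, here is a minimal Python sketch using the standard ideal-gas free-expansion result (the single mole and the doubling of volume are illustrative choices only):

import math

R = 8.314            # J/(mol*K), universal gas constant
k = 1.380649e-23     # J/K, Boltzmann constant
n = 1.0              # one mole of gas
V1, V2 = 1.0, 2.0    # volume doubles on free expansion (only the ratio matters)

dS = n * R * math.log(V2 / V1)
print(f"dS = {dS:.2f} J/K per mole")                  # ~5.76 J/K

# Per molecule this is k ln 2: doubling the accessible volume adds roughly one
# "left half or right half?" yes/no question of missing location information.
print(f"per molecule: {k * math.log(2):.3e} J/K")     # ~9.57e-24 J/K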

So, we are justified in reworking the Boltzmann expression to separate clumping/thermal and configurational components:

S = k ln (W_clump * W_config)

= k ln (W_th * W_c) . . . [Eqn A.11, cf. TBO 8.2a]

or, S = k ln W_th + k ln W_c = S_th + S_c . . . [Eqn A.11.1]

We now focus on the configurational component, the clumping/thermal one being in effect the same for at-random or specifically configured DNA or polypeptide macromolecules of the same length and proportions of the relevant monomers, as it is essentially energy of the bonds in the chain, which are the same in number and type for the two cases. Also, introducing Brillouin’s negentropy formulation of Information, with the configured macromolecule [m] and the random molecule [r], we see the increment in information on going from the random to the functionally specified macromolecule:

I_B = -[S_cm – S_cr] . . . [Eqn A.12, cf. TBO 8.3a]

Or, I_B = S_cr – S_cm = k ln W_cr – k ln W_cm

= k ln (W_cr/W_cm) . . . [Eqn A.12.1]

Where also, for N objects in a linear chain, n_1 of one kind, n_2 of another, and so on to n_i, we may see that the number of ways to arrange them (we need not complicate the matter by talking of Fermi-Dirac statistics, as TBO do!) is:

W = N!/[n_1! n_2! . . . n_i!] . . . [Eqn A.13, cf. TBO 8.7]

So, we may look at a 100-monomer protein, with on average 5 each of the 20 types of amino acid monomers along the chain, with the aid of log manipulations — take logs to base 10, do the sums in log form, then take back out the logs — to handle numbers over 10^100 on a calculator:

W_cr = 100!/[(5!)^20] ~ 2.4*10^116

For the sake of initial argument, we consider a unique polymer chain, so that each monomer is confined to a specified location, i.e. W_cm = 1, and S_cm = 0. This yields — through basic equilibrium of chemical reaction thermodynamics (follow the onward argument in TBO Ch 8) and the Brillouin information measure, which contributes to estimating the relevant Gibbs free energies (and with some empirical results on energies of formation etc.) — an expected protein concentration of ~10^-338 molar, i.e. far, far less than one molecule per planet. (There may be about 10^80 atoms in the observed universe, with carbon a rather small fraction thereof; and 1 mole of atoms is ~6.02*10^23 atoms.) Recall, known life forms routinely use dozens to hundreds of such information-rich macromolecules, in close proximity, in an integrated self-replicating information system on the scale of about 10^-6 m.
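
The counting side of that arithmetic is easy to check; here is a short Python sketch of Eqns A.12 to A.13 for this 100-monomer example (it reproduces only the counting and the Boltzmann/Brillouin conversions, not the full Gibbs free energy treatment of TBO Ch 8):

from math import factorial, log, log2

k_B = 1.380649e-23                          # J/K, Boltzmann constant

# Eqn A.13: arrangements of a 100-chain with 5 copies each of 20 monomer types
W_cr = factorial(100) // factorial(5) ** 20
print(f"W_cr = {float(W_cr):.2e}")          # ~2.4e116 arrangements

# Unique functional sequence assumed, so W_cm = 1 and S_cm = 0
W_cm = 1

# Eqn A.12.1: Brillouin information gained on going from random to specified chain
I_B = k_B * log(W_cr / W_cm)
print(f"I_B = {I_B:.2e} J/K ({log2(W_cr):.0f} bits)")   # ~3.7e-21 J/K, ~387 bits

# Equivalently, the configurational entropy given up in specifying the sequence:
dS_config = -k_B * log(W_cr)
print(f"dS_config = {dS_config:.2e} J/K per molecule")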

Of course, if one comes at the point from any of these directions, selectively hyperskeptical demands will be rolled out to fire off salvo after salvo of objections. Selective, as the blind chance needle in haystack models that cannot pass vera causa as a test simply are not subjected to such scrutiny and scathing dismissiveness by the same objectors. When seriously pressed, the most they are usually prepared to concede is that perhaps we don’t yet know enough, but rest assured “Science” will triumph, so don’t you dare put up “god of the gaps” notions.

To see what I mean, notice [HT: BA 77 et al] the bottom line of a recent article on OOL conundrums:

. . . So the debate rages on. Over the past few decades scientists have edged closer to understanding the origin of life, but there is still some way to go, which is probably why when Robyn Williams asked Lane, ‘What was there in the beginning, do you think?’, the scientist replied wryly: ‘Ah, “think”. Yes, we have no idea, is the bottom line.’

But in fact, adequate cause for FSCO/I is not hard to find: intelligently directed configuration meeting requisites a – e just above. Design.

There are trillions of cases in point.

And that is why I demand that — whatever flaws, elaborations, adjustments etc we may find or want to make — we need to listen carefully and fairly to Granville Sewell’s core point:

[Image: The-Emperor-has-no-clothes-illustration-8x61]
You are under arrest, for bringing the Emperor into disrepute . . .

. . . The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur.

The discovery that life on Earth developed through evolutionary “steps,” coupled with the observation that mutations and natural selection — like other natural forces — can cause (minor) change, is widely accepted in the scientific world as proof that natural selection — alone among all natural forces — can create order out of disorder, and even design human brains, with human consciousness. Only the layman seems to see the problem with this logic. In a recent Mathematical Intelligencer article [“A Mathematician’s View of Evolution,” The Mathematical Intelligencer 22, number 4, 5-7, 2000] I asserted that the idea that the four fundamental forces of physics alone could rearrange the fundamental particles of Nature into spaceships, nuclear power plants, and computers, connected to laser printers, CRTs, keyboards and the Internet, appears to violate the second law of thermodynamics in a spectacular way.1 . . . .

What happens in a[n isolated] system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. As I wrote in “Can ANYTHING Happen in an Open System?”, “order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door…. If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth’s atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here.” Evolution is a movie running backward, that is what makes it special.

THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn’t, that atoms would rearrange themselves into spaceships and computers and TV sets . . . [NB: Emphases added. I have also substituted in isolated system terminology as GS uses a different terminology.]

Surely, there is room to listen, and to address concerns on the merits. >>

_______________

I think we need to appreciate that the design inference applies to all three of thermodynamics, information and probability, and that we will find determined objectors who will attack all three in a selectively hyperskeptical manner.  We therefore need to give adequate reasons for what we hold, for the reasonable onlooker. END

PS: As it seems unfortunately necessary, I here excerpt the Wikipedia “simple” summary derivation of 2LOT from statistical mechanics considerations as at April 13, 2015 . . . a case of technical admission against general interest, giving the case where distributions are not necessarily equiprobable. This shows the basis of the point that for over 100 years now, 2LOT has been inextricably rooted in statistical-molecular considerations (where, it is those considerations that lead onwards to the issue that FSCO/I, which naturally comes in deeply isolated islands of function in large config spaces, will be maximally implausible to discover through blind, needle in haystack search on chance and mechanical necessity):

[Image: Wiki_2LOT_fr_stat_m]

With this in hand, I again cite a favourite basic College level Physics text, as summarised in my online note App I:

Yavorski and Pinski, in the textbook Physics, Vol I [MIR, USSR, 1974, pp. 279 ff.], summarise the key implication of the macro-state and micro-state view well: as we consider a simple model of diffusion, let us think of ten white and ten black balls in two rows in a container. There is of course but one way in which there are ten whites in the top row; the balls of any one colour being for our purposes identical. But on shuffling, there are 63,504 ways to arrange five each of black and white balls in the two rows, and 6-4 distributions may occur in two ways, each with 44,100 alternatives. So, if we for the moment see the set of balls as circulating among the various different possible arrangements at random, and spending about the same time in each possible state on average, the time the system spends in any given state will be proportionate to the relative number of ways that state may be achieved. Immediately, we see that the system will gravitate towards the cluster of more evenly distributed states. In short, we have just seen that there is a natural trend of change at random, towards the more thermodynamically probable macrostates, i.e. the ones with higher statistical weights. So “[b]y comparing the [thermodynamic] probabilities of two states of a thermodynamic system, we can establish at once the direction of the process that is [spontaneously] feasible in the given system. It will correspond to a transition from a less probable to a more probable state.” [p. 284.] This is in effect the statistical form of the 2nd law of thermodynamics.

Thus, too, the behaviour of the Clausius isolated system above [with interacting sub-systems A and B that transfer d’Q to B due to temp. difference] is readily understood: importing d’Q of random molecular energy so far increases the number of ways energy can be distributed at micro-scale in B, that the resulting rise in B’s entropy swamps the fall in A’s entropy. Moreover, given that [FSCO/I]-rich micro-arrangements are relatively rare in the set of possible arrangements, we can also see why it is hard to account for the origin of such states by spontaneous processes in the scope of the observable universe.

(Of course, since it is as a rule very inconvenient to work in terms of statistical weights of macrostates [i.e. W], we instead move to entropy, through S = k ln W. Part of how this is done can be seen by imagining a system in which there are W ways accessible, and imagining a partition into parts 1 and 2. W = W_1*W_2, as for each arrangement in 1 all accessible arrangements in 2 are possible and vice versa, but it is far more convenient to have an additive measure, i.e. we need to go to logs. The constant of proportionality, k, is the famous Boltzmann constant and is in effect the universal gas constant, R, on a per molecule basis, i.e. we divide R by the Avogadro number, N_A, to get: k = R/N_A. The two approaches to entropy, by Clausius and Boltzmann, of course correspond. In real-world systems of any significant scale, the relative statistical weights are usually so disproportionate that the classical observation that entropy naturally tends to increase is readily apparent.)
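
The ball-shuffling statistical weights quoted above are easy to verify directly; here is a short Python check (following the ten-white/ten-black, two-rows-of-ten setup as summarised):

from math import comb

# Ways to have w white balls (and 10 - w black) in the top row of ten,
# with the complementary balls in the bottom row:
total = 0
for w in range(11):
    ways = comb(10, w) * comb(10, 10 - w)
    total += ways
    print(f"{w:2d} white in top row: {ways:6d} ways")

print("total arrangements:", total)   # C(20, 10) = 184,756

# The 5-5 split has 63,504 arrangements, each 6-4 split has 44,100, while the
# 10-0 "all whites on top" state has exactly 1: near-even macrostates carry
# almost all the statistical weight, which is the statistical heart of 2LOT.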

This underlying context is easily understood and leads logically to 2LOT as an overwhelmingly likely consequence. Beyond a reasonable scale, fluctuations beyond a very narrow range are statistical miracles that we have no right to expect to observe.

And, that then refocusses the issue of connected, concurrent energy flows to provide compensation for local entropy reductions.

 

Comments
SalC & Mung: An interesting exchange; three mavericks together, I'd say. I would note again that for 100+ years, 2LOT has been inextricably interconnected with the statistical view at ultramicroscopic scale. It is in this context that the probabilistic, informational and thermodynamic facets have been interconnected, to the point that to try to pull one aspect out in exclusion of the others is a hopeless exercise.

What I agree with is that this stuff gets technical really fast, so that only those with an adequate background should engage such in contexts where technicalities are likely to come out. You need to know enough statistics and background math, probability, classical and statistical thermodynamics and information theory to follow what is going on. That is a tall order, and is basically calling for someone with an applied physics-engineering background with a focus on electronics and telecommunications. If you do not know what a Fermi level is, or cannot address noise factor/figure or temperature, or the informational entropy of a source, or why info gets a negative log probability metric, you probably don't have enough. Likewise, if you do not know what S = k*log W [or better yet, upper case Omega] means and how it comes to be that way, or the difference between a microstate and a macrostate, you don't have enough. A good test is whether you can follow the arguments in chs 7 - 9 of Thaxton et al in TMLO.

All of this is why I normally keep such matters as background. (It is only because they were put on the table again that I have taken them up.) But that does not mean they should not be done. They should. You will also notice that I never, of my own accord, talk in terms of "2LOT forbids X" in isolation. I will speak in terms of the statistical mechanical underpinnings that ground it; notice, for instance, my discussion in App 1 of my briefing note, where I go straight to a model that is statistical, though discussed qualitatively: http://www.angelfire.com/pro/kairosfocus/resources/Info_design_and_science.htm#thermod And I will imply or outright address fluctuations, and point out that beyond a certain modest scale, large fluctuations from thermodynamic equilibrium, resting on relative statistical weight of clusters of microstates, will be so overwhelmed statistically that they are maximally unlikely to be observed on the gamut of the sol system or observed cosmos. Even the now habitual insistence on observed cosmos is a technically backed point, scientifically and philosophically. Likewise things like speaking of biological or cell based life, etc.

Now, taking 500 coins on a sol sys scale and tossing at random, it is easy to see the configuration space is 2^500 possibilities. Whilst, if each of the 10^57 atoms in the sol system were made into an observer and were given an array of 500 coins, flipped, examined and recorded every 10^-13 or 10^-14 s (fast atomic chem interaction rates) for 10^17 s [big bang linked], we would be looking at sampling say 10^(13 + 17 + 57) = 10^87 possibilities, with replacement. Taking that as a straw, we could represent the 3.27*10^150 possibilities for 500 coins as a cubical haystack comparably thick to our galaxy, several hundred LY. Blind needle in haystack search at that level is negligibly different from no search of consequence, and we would have no right to expect to pick up any reasonably isolated zones in the config space. Too much stack, too few needles, far too little search.
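
To make the scale of that needle-in-haystack comparison explicit, here is a minimal Python sketch using the same round figures (the 10^57 atoms, the 10^13 observations per second and the 10^17 s are the stated assumptions, not independently derived values):

from math import log10

config_space = 2 ** 500        # ~3.27e150 possible 500-coin outcomes
atoms = 10 ** 57               # atoms in the solar system (assumed, as above)
rate = 10 ** 13                # observations per atom per second (assumed fast chem rate)
duration = 10 ** 17            # seconds, roughly the age of the cosmos (assumed)

samples = atoms * rate * duration          # ~1e87 observations in all
print(f"config space    : {float(config_space):.2e}")
print(f"samples         : {float(samples):.2e}")
print(f"fraction sampled: {samples / config_space:.1e}")   # ~3e-64 of the space
print(f"exponent gap    : about 10^{log10(config_space) - log10(samples):.0f}")
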
Where, as the Wicken wiring diagram approach to understanding FSCO/I shows, functional configs will be very tightly constrained and so will come in isolated zones, islands of function. Shake up a bag of Penn International 50 parts as long as you please; you will not build a functional reel. 500 H or similar configs fall in that context: specific, simply describable or observable as such, tightly constrained as to accept/reject. Maximally unlikely to be observed by blind needle in haystack search, in a context where the overwhelming bulk of possibilities will be in the cluster near 50-50, in no particular order. Likewise, hitting on a string that gives 72 characters in ASCII code in contextually relevant English text is maximally unlikely by such blind search. It matters not whether pure chance or chance plus necessity [biased chance or chance with directional drift . . . ], so long as genuinely blind.

Where, all of this will be instantly familiar, and uses very familiar themes that are culturally broadly accessible. Save, that the idea of a configuration space [a cut down phase space with momentum left off, i.e. a state space] will need explanation or illustration. Likewise the principle that a structured set of Y/N q's -- a description language -- can specify the wiring diagram for an entity. But, reference to what AutoCAD etc do will instantly provide context.

The first direct implication is that once such is understood, the OOL challenge moves into focus. The root of the tree of life. Start with physics and chemistry in a pond or the like and get to viable architectures for life in plausible and empirically, observationally warranted steps. That is, as OOL is temporally inaccessible, show causal adequacy of proposed blind watchmaker mechanisms in the here and now, as a logical and epistemologically controlled restraint on ideological speculation. Silence.

Next, address the issue of the origin of body plans; let's just use OOBP. This is the relevant macroevolutionary context. The trend is to want to get away with extrapolations from micro changes and/or to smuggle in the notion that it's all within a continent of incrementally accessible function, with smoothly accessible fitness peaks. So, the matter pivots on pointing out the reality of islands of function, with search challenges even more constrained as we now deal with Earth's biosphere. Where, again, FSCO/I comes in islands deeply isolated in config spaces. Proteins in AA sequence space are a good study example. All of this has been done for years. Indeed, here is Meyer in his 2004 article in PBSW, which was drowned out by pushing an artificial sea of controversy and career busting:
One way to estimate the amount of new CSI that appeared with the Cambrian animals is to count the number of new cell types that emerged with them (Valentine 1995:91-93) . . . the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types . . . New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information. Thus an increase in the number of cell types implies (at a minimum) a considerable increase in the amount of specified genetic information. Molecular biologists have recently estimated that a minimally complex single-celled organism would require between 318 and 562 kilobase pairs of DNA to produce the proteins necessary to maintain life (Koonin 2000). More complex single cells might require upward of a million base pairs. Yet to build the proteins necessary to sustain a complex arthropod such as a trilobite would require orders of magnitude more coding instructions. The genome size of a modern arthropod, the fruitfly Drosophila melanogaster, is approximately 180 million base pairs (Gerhart & Kirschner 1997:121, Adams et al. 2000). Transitions from a single cell to colonies of cells to complex animals represent significant (and, in principle, measurable) increases in CSI . . . . In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types, but also for the origin of new body plans . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes--the very stuff of macroevolution--apparently do not vary. In other words, mutations of the kind that macroevolution doesn't need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don't occur.
Do they have a good answer to this, one backed by vera causa? Nope. No more than to this from Loennig of the Max Planck Institute [a Jehovah's Witness, BTW . . . ], in his equally peer-reviewed 2004 presentation on "Dynamic genomes, morphological stasis, and the origin of irreducible complexity":
. . . examples like the horseshoe crab [~250 MY fossil morphology stasis] are by no means rare exceptions from the rule of gradually evolving life forms . . . In fact, we are literally surrounded by 'living fossils' in the present world of organisms when applying the term more inclusively as "an existing species whose similarity to ancient ancestral species indicates that very few morphological changes have occurred over a long period of geological time" [85] . . . . Now, since all these "old features", morphologically as well as molecularly, are still with us, the basic genetical questions should be addressed in the face of all the dynamic features of ever reshuffling and rearranging, shifting genomes, (a) why are these characters stable at all and (b) how is it possible to derive stable features from any given plant or animal species by mutations in their genomes? . . . . A first hint for answering the questions . . . is perhaps also provided by Charles Darwin himself when he suggested the following sufficiency test for his theory [16]: "If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down." . . . Biochemist Michael J. Behe [5] has refined Darwin's statement by introducing and defining his concept of "irreducibly complex systems", specifying: "By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning" . . . [for example] (1) the cilium, (2) the bacterial flagellum with filament, hook and motor embedded in the membranes and cell wall and (3) the biochemistry of blood clotting in humans . . . . One point is clear: granted that there are indeed many systems and/or correlated subsystems in biology, which have to be classified as irreducibly complex and that such systems are essentially involved in the formation of morphological characters of organisms, this would explain both, the regular abrupt appearance of new forms in the fossil record as well as their constancy over enormous periods of time. For, if "several well-matched, interacting parts that contribute to the basic function" are necessary for biochemical and/or anatomical systems to exist as functioning systems at all (because "the removal of any one of the parts causes the system to effectively cease functioning") such systems have to (1) originate in a non-gradual manner and (2) must remain constant as long as they are reproduced and exist. And this could mean no less than the enormous time periods mentioned for all the living fossils hinted at above. Moreover, an additional phenomenon would also be explained: (3) the equally abrupt disappearance of so many life forms in earth history . . . The reason why irreducibly complex systems would also behave in accord with point (3) is also nearly self-evident: if environmental conditions deteriorate so much for certain life forms (defined and specified by systems and/or subsystems of irreducible complexity), so that their very existence be in question, they could only adapt by integrating further correspondingly specified and useful parts into their overall organization, which prima facie could be an improbable process -- or perish . . . . 
According to Behe and several other authors [5-7, 21-23, 53-60, 68, 86] the only adequate hypothesis so far known for the origin of irreducibly complex systems is intelligent design (ID) . . . in connection with Dembski's criterion of specified complexity . . . . "For something to exhibit specified complexity therefore means that it matches a conditionally independent pattern (i.e., specification) of low specificational complexity, but where the event corresponding to that pattern has a probability less than the universal probability bound and therefore high probabilistic complexity" [23]. For instance, regarding the origin of the bacterial flagellum, Dembski calculated a probability of 10^-234[22].
Has such evidence of islands of irreducibly complex function been adequately answered? Nope, again. So, why is it we still see the sort of hot disputes that pop up in and around UD, given that the only empirically demonstrated adequate cause of FSCO/I is intelligently directed configuration? And the needle in haystack blind search challenge readily explains why: per inductive inference to best current explanation, FSCO/I is a highly reliable sign of design.

A clue is that objectors then zero in and try to throw up reasons to dismiss or ignore FSCO/I as real, relevant and recognised. Never mind that the concept is readily demonstrated, is directly recognisable per the wiring diagram pattern, is discernible in writings of leading ID thinkers and demonstrably traces to Orgel and Wicken in the 1970's. In fact, it started out as just a handy way to abbreviate a descriptive phrase.

We are not dealing with dialogue constrained by mutual respect for one another, for first principles of reason and for objective assessment of evidence and warrant. We face a dirty ideological war, and a case where quite literally the stronger the argument, the more strident and ruthless the objections and obfuscatory talking points. So, we simply will have to lay out our case at more accessible and more technical levels, demonstrating adequate warrant. And stand. And hold our ground in the face of ruthless and too often uncivil behaviour. We are dealing with those who -- quite correctly -- view what we have stood for as a dangerous threat to their ideological schemes and linked sociocultural agendas. Agendas shaped by the sheer amorality and radical relativism of a priori evolutionary materialism. So, there is deep polarisation, there is rage, there is ruthlessness, there is even outright nihilism. But, we must stay the course, as, if science is not rescued from such ideologisation, what is left of rationality in our civilisation will collapse. With fatal consequences. If you doubt me on this, simply observe how ever so many of the more determined objectors are perfectly willing to burn down the house of reason to preserve their ideology. A very bad sign indeed. It is kairos . . . time to stand and be counted. Only that will in the end count.

And, coming back full circle: at popular level, the discussion pivots on configuration, FSCO/I and blind needle in haystack search. But that needs to have technical backbone, which will pivot on the three interacting perspectives, facets of a whole. And those who speak to such things need to understand what they are dealing with. On thermodynamics specifically, the matter focusses first on the statistical underpinnings of 2LOT, and the need for relevant concurrent flows of energy, mass and info coupled to energy converters and constructors that create FSCO/I-rich entities. Where, the informational perspective that entropy is best seen as a metric of average missing info to specify microstate given macrostate defined at observable level becomes pivotal. For, that is closely linked to clustering of microstates, relative statistical weight of clusters and spontaneous change tendencies, etc. From this, we can see why FSCO/I takes on the significance it does. But, we must be realistic: the stronger the argument, the more determined, ruthless and utterly closed-minded will be the objections from committed ideologues. And those who look to their leadership will blindly parrot the talking points that have been drummed into them.
One of these, patently, is the substitution of irrelevant flows in the open system compensation argument. Which is why, again, I highlight Clausius, as you cited:
Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.
KF

PS: I again note that in L. K. Nash's classic introduction to statistical thermodynamics, the example of a 500 or 1,000 or so coin system is used as the key introductory example.

kairosfocus
April 12, 2015, 03:02 AM PDT
scordova

Here is where Sewell (and all other 2nd_law_SM IDers) disagree with you: the conception of statistical mechanics (SM). SM, as its name says, incorporates statistics (and of course probability theory) as a tool. So in a sense SM also encompasses your LLN argument, but not vice versa.
You say: we are talking statistical mechanics and thermodynamics we are talking states that are thermodynamically related, not heads/tails related. Thermodynamic macrostates are defined by Temperature, Pressure, Volume, Internal Energy, Quantity of Particles, etc. not heads/tails of coin configurations. That’s why it’s called the 2nd law of thermodynamics, not Matzke’s law of 500 fair coins.
Here again we disagree. SM can deal with all large systems that have statistical behaviour. E.g. when SM is applied to quantum mechanics or information theory, heat, temperature, pressure, volume... are not the main issue. There they work with the Boltzmann constant equal to 1 (a number without physical dimension). So in a sense SM has a larger application than mere classical heat thermodynamics. This distinction is the reason I always use the term "2nd_law_SM", not simply the generic "2nd law". I know you don't want to acknowledge that, so you limit yourself to classical heat thermodynamics, but then you are necessarily forced to invent your LLN argument, when it could well be incorporated into our more general 2nd_law_SM ID argument. It is a bad situation that helps evolutionists only. Please reconsider your position and jump into the Sewell camp. You would be welcome.

niwrad
April 12, 2015, 02:05 AM PDT
For 2nd_law_SM, systems spontaneously go to probable states.
Where can the readers find this 2nd_law_SM formally stated in physics, chemistry, and engineering texts? If it's your own construction, that's fine, but it would be inappropriate to represent it as an accepted version of the 2nd law. Btw, if we are talking statistical mechanics and thermodynamics we are talking states that are thermodynamically related, not heads/tails related. Thermodynamic macrostates are defined by Temperature, Pressure, Volume, Internal Energy, Quantity of Particles, etc. not heads/tails of coin configurations. That's why it's called the 2nd law of thermodynamics, not Matzke's law of 500 fair coins.

scordova
April 12, 2015, 12:54 AM PDT
scordova
Any 2LOT IDist is invited to show how 500 fair coins 100% heads is not a product of chance based on any well-accepted definition of 2LOT.
For 2nd_law_SM, systems spontaneously go to probable states. The 500-heads output of a 500-coin flipping system is the most improbable state of all the 2^500 states, so such an output cannot be a spontaneous result.

niwrad
April 12, 2015, 12:17 AM PDT
Mung minces words: Chance is not a cause, Salvador,
If you came across a table on which was set 500 fair coins and 100% displayed the “heads” side of the coin, how would you, using 2LOT, test “chance” as a hypothesis to explain this particular configuration of coins?

scordova
April 11, 2015, 07:39 PM PDT
Salvador:
Any 2LOT IDist is invited to show how 500 fair coins 100% heads is not a product of chance based on any well-accepted definition of 2LOT.
Chance is not a cause, Salvador, and the 2LOT does not cause chance. Does anyone but me see the irony in a Young Earth Creationist appealing to the law of large numbers?

Mung
April 11, 2015 at 07:28 PM PDT
Here are more ways the 2nd law is expressed: http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/node37.html Any 2LOT IDist is invited to show how 500 fair coins at 100% heads is not a product of chance, based on any well-accepted definition of 2LOT. I showed how LLN can be used:
100% heads is maximally far from the expectation of 50% heads, therefore we can reject chance as an explanation, since 100% heads is inconsistent with LLN.
Would any 2LOT proponents care to state the case in comparably succinct ways from well-accepted definitions of 2LOT? I don't think definitions of 2LOT involving the term "disorder" should count, in light of this material from a website at Occidental College by a respected educator in thermodynamics: http://entropysite.oxy.edu/
2."Disorder — A Cracked Crutch for Supporting Entropy Discussions" from the Journal of Chemical Education, Vol. 79, pp. 187-192, February 2002. "Entropy is disorder" is an archaic, misleading definition of entropy dating from the late 19th century before knowledge of molecular behavior, of quantum mechanics and molecular energy levels, or of the Third Law of thermodynamics. It seriously misleads beginning students, partly because "disorder" is a common word, partly because it has no scientific meaning in terms of energy or energy dispersal. Ten examples conclusively demonstrate the inadequacy of "disorder" in general chemistry.
and
April 2014 The 36 Science Textbooks That Have Deleted "disorder" From Their Description of the Nature of Entropy (As advocated in the publications of Dr. Frank L. Lambert, Professor Emeritus, Chemistry, Occidental College.)
scordova
April 11, 2015 at 05:32 PM PDT
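[ --> The "maximally far from expectation" claim above can be quantified under a fair-coin chance hypothesis; a minimal sketch, with the z-score and the distribution-free Chebyshev bound as illustrative add-ons not found in the original comment: ]

```python
from math import sqrt, comb

N, p = 500, 0.5
mean = N * p                  # expected heads = 250
sd = sqrt(N * p * (1 - p))    # standard deviation ~ 11.18

observed = 500                # 100% heads
z = (observed - mean) / sd    # ~ 22.4 standard deviations above expectation

# Distribution-free Chebyshev bound: P(|X - mean| >= z*sd) <= 1/z^2
chebyshev = 1 / z**2          # = 0.002 here

# Exact two-sided tail under the fair-coin hypothesis: only 0 or 500 heads
# deviate this far, so the probability is 2 * C(500,0) / 2^500
exact = 2 * comb(N, 0) / 2**N # ~ 6.1e-151

print(f"z-score         : {z:.1f}")
print(f"Chebyshev bound : {chebyshev:.4f}")
print(f"exact tail prob : {exact:.3e}")
```

[ --> Either bound rejects the chance hypothesis at any conventional significance level, which is the point being pressed via LLN. ]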
Salvador:
Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.
Salvador:
I invite 2LOT proponents to use this definition of 2LOT to argue against the chance hypothesis for 500 fair coins 100% heads.
As has been pointed out to you repeatedly, the Clausius formulation is not the only formulation of the 2LOT. And again, as has been pointed out to you repeatedly, you have things exactly backwards. It's all about probabilities, and the probabilities do not prevent the improbable from taking place. As improbable as it may seem, a tornado could indeed pass through a junkyard and leave behind a fully functioning 747. Hell might freeze over first, but "the laws of thermodynamics" do not prevent it.

Mung
April 11, 2015 at 05:15 PM PDT
A fundamental approach to the theory of the properties of macroscopic matter has three aspects. First, there is the detailed characterization of the atomic states and structure in terms of the formalism of quantum mechanics. Second, there is the application of statistical considerations to these states; this is the subject matter of statistical mechanics. And, third, there is the development of the macroscopic consequences of the statistical theory, constituting the subject matter of thermodynamics. - Herbert B. Callen. Thermodynamics
Mung
April 11, 2015 at 04:59 PM PDT
Salvador:
sample average converges in probability towards the expected value
I guess now we need an explanation of sampling theory, of expected value, and of why the sample average converges in probability towards the expected value. I am sure you have one that's simple. Wouldn't introducing the concept of a probability distribution be simpler?

Mung
April 11, 2015 at 04:40 PM PDT
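[ --> For readers who would like to see the convergence being discussed rather than take it on trust, a minimal simulation sketch; the sample sizes and seed are arbitrary illustrative choices: ]

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Law of large numbers: the sample average of fair-coin flips (heads = 1,
# tails = 0) converges in probability to the expected value, 0.5.
for n in (10, 100, 1_000, 10_000, 100_000):
    flips = [random.randint(0, 1) for _ in range(n)]
    print(f"n = {n:>7,}   sample average = {sum(flips) / n:.4f}")
```

[ --> The typical deviation from 0.5 shrinks roughly like 1/sqrt(n), which is why 500 coins sitting at 100% heads lies so far outside what a chance hypothesis will bear. ]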
kairosfocus:
[--> this is of course equivalent to the string of yes/no questions required to specify the relevant "wiring diagram" for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002 . . . ]
Instructive as always, kf. And it doesn't take a genius to make the connection between information theory and thermodynamics. One doesn't even have to eat crow to see the connection. In fact, the notion has a long and distinguished history. I guess what puzzles me most is that Salvador, with all his multiple degrees, can't make the connection.

Mung
April 11, 2015 at 04:31 PM PDT
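[ --> The information-thermodynamics connection alluded to just above can be shown in a couple of lines: for W equally probable microstates, specifying one microstate takes log2 W bits, and Boltzmann's S = k ln W is that same count rescaled by k ln 2. A minimal sketch; the choice of W is purely illustrative: ]

```python
from math import log, log2

k_B = 1.380649e-23   # Boltzmann constant, J/K

W = 2 ** 500         # statistical weight of an illustrative macrostate

shannon_bits = log2(W)                   # information to specify one microstate: 500 bits
boltzmann_S = k_B * log(W)               # thermodynamic entropy, S = k ln W, in J/K
rescaled = k_B * log(2) * shannon_bits   # the same quantity via S = (k ln 2) * H_bits

print(f"Shannon measure : {shannon_bits:.1f} bits")
print(f"Boltzmann S     : {boltzmann_S:.3e} J/K")
print(f"(k ln 2) * bits : {rescaled:.3e} J/K")   # matches Boltzmann S
```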
Algorithmically controlled metabolisms (such as realized in life) are low multiplicity constructs as a matter of principle
Let me try to explain in more lay terms. Syntactically and semantically coherent statements in English occupy an infinitesimally small region of the space of possible alphabetic sequences. There are multitudinously more ways to write syntactically and semantically nonsensical statements than coherent statements, hence we could say:
1. nonsense statements have high multiplicity
2. coherent statements have low multiplicity
According to LLN, random processes (like monkeys on a typewriter, figuratively speaking) will result in nonsense statements because nonsense statements have high multiplicity, and random processes tend toward high-multiplicity targets. Random processes will not result in coherent statements.
A biological organism's "software" operates on a syntax and semantics that it defines for itself. Hence, some call life an example of an "algorithmically controlled metabolism". In fact life implements an instance of the most complex linguistic construct in the universe, a highly complex Quine computer: http://en.wikipedia.org/wiki/Quine_(computing) Self-defined, complex language-processing systems and complex quine computers occupy an infinitesimally low multiplicity configuration relative to the space of possible molecular configurations as a matter of principle. LLN says random processes will not then create the first life in this universe.
Rather than appealing to LLN and the problems of multiplicity in the Origin of Life, Wells stated the problem in less technical terms in this way:
Even if [Stanley] Miller’s experiment were valid, you’re still light years away from making life. It comes down to this. No matter how many molecules you can produce with early Earth conditions, plausible conditions, you’re still nowhere near producing a living cell, and here’s how I know. If I take a sterile test tube, and I put in it a little bit of fluid with just the right salts, just the right balance of acidity and alkalinity, just the right temperature, the perfect solution for a living cell, and I put in one living cell, this cell is alive – it has everything it needs for life. Now I take a sterile needle, and I poke that cell, and all its stuff leaks out into this test tube. We have in this nice little test tube all the molecules you need for a living cell – not just the pieces of the molecules, but the molecules themselves. And you cannot make a living cell out of them. You can’t put Humpty Dumpty back together again. So what makes you think that a few amino acids dissolved in the ocean are going to give you a living cell? It’s totally unrealistic.
scordova
April 11, 2015 at 12:17 PM PDT
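[ --> A toy version of the multiplicity comparison in the comment above. The alphabet size, phrase length and the count of "coherent" strings are made-up illustrative assumptions, not data; only the order of magnitude of the ratio matters: ]

```python
ALPHABET = 27        # 26 letters plus space (illustrative)
PHRASE_LEN = 40      # a short English phrase (illustrative)

total_strings = ALPHABET ** PHRASE_LEN   # every possible 40-character string

# Deliberately generous, made-up assumption: suppose 10**20 of those strings
# are syntactically and semantically coherent English.
coherent = 10 ** 20

print(f"possible 40-char strings : {total_strings:.3e}")             # ~1.8e57
print(f"assumed coherent strings : {coherent:.3e}")
print(f"fraction coherent        : {coherent / total_strings:.3e}")  # ~5.6e-38
```

[ --> Even under an absurdly generous count of "coherent" strings, the high-multiplicity nonsense swamps the low-multiplicity sense; that is the shape of the multiplicity argument being made about algorithmically controlled metabolisms. ]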
Thank you KF for highlighting my comment.
long term maverick ID (and, I think, still YEC-sympathetic) supporter SalC
I was a YEC-sympathetic OEC for years, but as of 2013 I'm a professing YEC, though that is a personal view, not a scientific claim. I'm viewed as a maverick IDist because:
1. I don't think ID is science, even though I believe it is true; rather than absolute statements of truth or debates over whether "ID is science", the question of ID is better framed in terms like Pascal's Wager.
2. I don't agree with 2LOT being used in defense of ID.
3. I argue information theory in defense of ID is not as good as arguing basic probability.
4. The age of the fossils (really the time of death of the fossils) is not strictly speaking a YEC issue; it is not an age-of-the-universe issue, it is a time-of-death issue. Empirical evidence strongly argues the fossil record is recent, not old. Short time frames favor ID over evolution, and hence evolutionary theory, as stated, is likely wrong. The time of death of the fossils is recent and independent of the radiometric ages of the rocks they are buried in. If a living dog today were buried in 65-million-year-old rocks, that wouldn't imply, after we exhume it, that the dog died 65 million years ago. Follow the evidence where it leads, and the evidence says the fossils died more recently than evolutionists claim. C14 traces are ubiquitous in the carboniferous era, and the best explanation is recency in the time of death of fossils.
But back to the OP: should ID be defended by
1. the Law of Large Numbers (LLN),
2. information theory, or
3. the 2nd Law of Thermodynamics (2LOT)?
I'll argue by way of illustration. If we found 500 fair coins 100% heads, should an IDist argue against the chance hypothesis using:
1. The Law of Large Numbers (the best approach, imho). Here is the law of large numbers:
sample average converges in probability towards the expected value
The expected value of a system of 500 fair coins in an uncertain configuration (like after flipping and/or shaking) is 50% heads. 100% heads is farthest from expectation, therefore we would rightly reject the chance hypothesis as an explanation. Simple, succinct, unassailable.
2. Information Theory (like using a sledgehammer to swat flies; it will work if the sledgehammer is skillfully used -- yikes!). Do all the fancy math and voluminous dissertations add force to the argument? I'll let the information theory proponents offer their information arguments to reject the chance hypothesis. I won't even try...
3. 2nd Law of Thermodynamics (2LOT). Let me state the most widely accepted version of 2LOT, as would be found in the textbooks of physics, chemistry and engineering students: CLAUSIUS STATEMENT
Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.
I invite 2LOT proponents to use this definition of 2LOT to argue against the chance hypothesis for 500 fair coins 100% heads. If one wouldn't use Information Theory or 2LOT for such a trivial "design" as 500 fair coins 100% heads, why then should such arguments be appropriate for substantially more complex designs? Behe's Edge of Evolution was basically a probability argument. It works. It gets the point across. LLN seems to me a superior approach. The Humpty Dumpty argument is a subtle instance of the LLN argument; it is the way I would defend the idea "life does not proceed from non-life" (to extrapolate Pasteur's "life comes from life" statement). The heart of the issue is which is the better approach for IDists:
1. LLN
2. Information Theory
3. 2LOT
I have argued that LLN, imho, is the best on many levels.

scordova
April 11, 2015 at 11:40 AM PDT
Sev: Now for: >> let us accept, for the sake of argument, that the emergence of life is so improbable>> 1 --> vastly improbable by blind chance and mechanical necessity. To which, you have no valid counter. >> that the only reasonable explanation is the intervention of an intelligent agent.>> 2 --> The only empirically observed, analytically plausible source of FSCO/I is design. 3 --> Where the statistics underpinning 2LOT is pivotal to seeing why such is so. >>To create life>> 4 --> A distinct task, separate from origin of a cosmos fitted for life >> – and possibly a universe in which it can survive –>> 5 --> This is prior and the evidence of fine tuning that sets up a habitat for cell based life is decisive. >> such an agent must contain the necessary amount of knowledge or information>> 6 --> You conflate knowledge, a function of rationally contemplative mind, with data storage and information processing, which can be blindly mechanical and would be GIGO-limited. 7 --> Knowledge, is well warranted, credibly true and/or reliable belief, so it requires rationally contemplative subjects. 8 --> To try to get to such from data and signals processed by some blindly mechanical computational substrate is to try to get North by insistently heading West. 9 --> Absurdly at cross purposes, as say Reppert highlighted:
. . . let us suppose that brain state A, which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [[But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [[so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.
>> which, from our perspective, would make it nearly if not actually omniscient.>> 10 --> A cosmological designer would be highly knowledgeable and massively powerful, which would include the possibility of omniscience and omnipotence. Join such with ontological, cosmological and moral reasoning and you are getting somewhere: an inherently good creator God, a necessary [thus eternal] and maximally great being, who is worthy of being served by doing the good. 11 --> But the designer of cell based life does not require any such constraint. Within 100 years, I am confident we will be able to do it. >>Such a being must, therefore, itself be hugely complex>> 12 --> Nope, we have no basis for inferring that mind is composite and complicated, as opposed to computational substrates. >>and, hence, hugely improbable.>> 13 --> attempted reductio, fails because of strawman creation. >> Which prompts the perfectly reasonable question of whence came the Designer? The only rational explanation for such a hugely improbable entity is another Designer – and so on. The only way to halt an infinite regress is to posit an uncaused first cause (UFC).>> 14 --> Further strawmen. By projecting implicit substitutions of computational substrates etc and conflating information processing with reasoning, you end with a sophomoric caricature of theism that has been recently popularised by Dawkins. 15 --> Instead, the issue is modes of being and causal adequacy (with the reality of our being under moral government involved). As I pointed out in the OP for a recent thread that you did not try to raise such an argument on:
Let me do a basic outline of key points: 1: A world, patently exists. 2: Nothing, denotes just that, non-being. 3: A genuine nothing, can have no causal capacity. 4: If ever there were an utter nothing, that is exactly what would forever obtain. 5: But, per 1, we and a world exist, so there was always something. 6: This raises the issue of modes of being, first possible vs impossible. 7: A possible being would exist if a relevant state of affairs were realised, e.g. heat + fuel + oxidiser + chain rxn –> fire (a causal process, showing fire to depend on external enabling factors) 8: An impossible being such as a square circle has contradictory core characteristics and cannot be in any possible world. (Worlds being patently possible as one is actual.) 9: Of possible beings, we see contingent ones, e.g. fires. This also highlights that if something begins, there are circumstances under which it may not be, and so, it is contingent and is caused as the fire illustrates. 10: Our observed cosmos had a beginning and is caused. This implies a deeper root of being, as necessarily, something always was. 11: Another possible mode of being is a necessary being. To see such, consider a candidate being that has no dependence on external, on/off enabling factors. 12: Such (if actual) has no beginning and cannot end, it is either impossible or actual and would exist in any possible world. For instance, a square circle is impossible, One and the same object cannot be circular and square in the same sense and place at the same time . . . but there is no possible world in which twoness does not exist. 13: To see such, begin with the set that collects nothing and proceed: { } –> 0 {0} –> 1 {0, 1} –> 2 Etc. 14: We thus see on analysis of being, that we have possible vs impossible and of possible beings, contingent vs necessary. 15: Also, that of serious candidate necessary beings, they will either be impossible or actual in any possible world. That’s the only way they can be, they have to be in the [world-]substructure in some way so that once a world can exist they are there necessarily. 16: Something like a flying spaghetti monster or the like, is contingent [here, not least as composed of parts and materials], and is not a serious candidate. (Cf also the discussions in the linked thread for other parodies and why they fail.) 17: By contrast, God is a serious candidate necessary being, The Eternal Root of being. Where, a necessary being root of reality is the best class of candidates to always have been. 18: The choice, as discussed in the already linked, is between God as impossible or as actual. Where, there is no good reason to see God as impossible, or not a serious candidate to be a necessary being, or to be contingent, etc. 19: So, to deny God is to imply and to need to shoulder the burden of showing God impossible. [U/D April 4, 2015: We can for illustrative instance cf. a form of Godel's argument, demonstrated to be valid: . . . ] 20: Moreover, we find ourselves under moral government, to be under OUGHT. 21: This, post the valid part of Hume’s guillotine argument (on pain of the absurdity of ultimate amorality and might/manipulation makes ‘right’) implies that there is a world foundational IS that properly bears the weight of OUGHT. 22: Across many centuries of debates, there is only one serious candidate: the inherently good, eternal creator God, a necessary and maximally great being worthy of loyalty, respect, service through doing the good and even worship. 
23: Where in this course of argument, no recourse has been had to specifically religious experiences or testimony of same, or to religious traditions; we here have what has been called the God of the philosophers, with more than adequate reason to accept his reality such that it is not delusional or immature to be a theist or to adhere to ethical theism. 24: Where, ironically, we here see exposed, precisely the emotional appeal and hostility of too many who reject and dismiss the reality of God (and of our being under moral government) without adequate reason. So, it would seem the shoe is rather on the other foot.
16 --> So, theism is not at all like the caricature you would set up and knock over. >>The problem is that a UFC smacks too much of being just a ploy to get you out of an unpalatable alternative.>> 17 --> Loaded language to set up and knock over yet another strawman. 18 --> The issue is, why is there something -- a world, with us as rational and morally governed beings -- in it; rather than nothing. That brings up ground of being, and all sorts of issues, which are admittedly difficult and indeed abstruse, but -- as Mathematics and Physics show -- that is not the same as absurd. 19 --> On considering such, God as a necessary being ground of being is a reasonable alternative, and one that, by the logic of such modes of being, will be either impossible or actual. 20 --> Strawmannish distortions driven by failing to understand on its own terms, or rhetorical parodies rooted in such misunderstandings, don't work to shift that off the table. Of course, this is a side note for something important in its own right but not particularly relevant to the focus of this thread's topic. I suggest going to the other thread if one wishes to debate this. KF

kairosfocus
April 11, 2015 at 09:13 AM PDT
Sev Let me take up:
The problem is that your alternative of Intelligent Design offers no better explanatory purchase. First, it answers a different question to the one you are asking of science. It is a suggestion of who might have done it, not how it was done, which is what you are demanding science tells us.
The first of these is a bare ideological dismissal. FSCO/I is real, is relevant to life and is ROUTINELY and ONLY seen to have just one adequate cause, design. Intelligently directed configuration. Design would naturally implicate appropriate energy and mass flows, prescriptive information, coupling to energy converters and constructors, as well as exhaust of degraded energy and wastes. Where, we already see from Venter et al precursor technologies, and something like the manipulation techniques down to atoms already seen in say the tunnelling microscope shots with IBM spelled out with atoms etc show there is no roadblock to precise molecular scale manipulation. Thermodynamics is satisfied, information, mass and energy flow requisites are satisfied, coupling and relevant compensation are satisfied. All we need to do it ourselves is several generations of progress. And yes, I openly expect relevant nanotech to be in place across this century, towards solar system colonisation. Next, where do you come from with a who might have done it? Have you so disregarded open statements, repeatedly on record since Thaxton et al, that inference to design as process relevant to cell based life does not involve being able to infer that a relevant designer is within or beyond the cosmos? Where, the phenomenon in hand, is cell based life on earth? Where, we are observing it? I have repeatedly stated that on the phenomenon of earth based cellular life, a molecular nanotech lab some generations beyond Venter would do. Indeed, I strongly believe, across this century, will do. Though, I think nanotech replicators to do industrial revo 3.0 are more what will be in mind: nanotech fabricators with self replication. Have you fallen into believing your own propaganda? Other than that, I find it incredible to see declarations like that. If you want to see where I do think evidence points beyond the cosmos, shift gears to cosmological fine tuning joined to and interacting with ontological, cosmological and moral government considerations. Where, most of that goes into another intellectual realm completely, straight philosophy. And where such scientific aspects as obtain stand distinct from origin of cell based life and its diversification. If I did not have strong reason to think FSCO/I as a SCIENTIFIC matter is a strong sign of design, I would still be a theist. And holding that FSCO/I in life based on cells points to design is not a basis for that theism. Period. If, that is what is clouding your mind. How twerdun is not a problem for design. We know design is adequate to FSCO/I and we know technologies are in hand that point to doing it at molecular nanotech level. Indeed IIRC people have shown D/RNA manipulation to store info. Venter IIRC. Vera causa -- demonstrated causal adequacy, is passed for intelligently directed configuration. But, if you are looking to blind chance and/or mechanical necessity creating FSCO/I, that is an utterly different matter. It has never been shown to be causally adequate, and BIG claims are on the table. In addition, such claims are inherently mechanistic and so need to demonstrate causal adequacy. All I am seeing above is a back-handed way of admitting that vera causa has not been passed, multiplied by an attempt to suggest that intelligently directed configuration cannot claim vera causa. Thanks for the admission, if back-handed is all you can give, we will take it. And, design routinely causes FSCO/I with evidence in hand that it can do so with relevant molecular nanotechs. 
Right now, I'd say the smart money is on design. KF

kairosfocus
April 11, 2015 at 08:37 AM PDT
Sev, nope . . . labelling pejoratively [and dismissively smearing a Nobel-equivalent prize-holding expert in thermodynamics into the bargain] does not work. First, something as simple as a fishing reel or a D'Arsonval galvanometer is well beyond the threshold of 500 - 1,000 bits of description length for its config, and arguably a nut-and-bolt mechanism is too. Where, the threshold of sufficient complexity that fluctuations do not explain is well, well short of what is needed for:
a: encapsulated [thus shielded from cross reactions],
b: intelligently gated [to pass in valid nutrients and out wastes],
c: metabolic [to couple energy and mass flows with prescriptive info, joined to energy converters and constructors to configure requisite components and harvest required energy],
d: von Neumann code-using kinematic self replicator using [to self replicate],
e: C-Chemistry aqueous medium [to have relevant underlying chemistry],
f: protein using [cellular workhorse family of molecules],
g: d/rna using [for information stores and more],
h: relevantly homochiral [for geometric handedness requisite for key-lock fitting etc]
. . . cell based life. Consequently a whole industry has sprung up to propose a step by step bridge to precursors in warm ponds or whatever other environment du jour is currently favoured. Basic problem? No viable answer that is backed up by empirically demonstrated adequacy, not involving huge investigator manipulation, has been found. Nor is any such in prospect. So claims and just so stories about replicators, RNA worlds and the like fall to the ground. Never mind, your suggestions like:
If you want a very simple a/mat explanation of how complex machines could be created by natural processes, one possibility is you start with basic chemicals which assemble more complex molecules, which form self-replicating structures, which over vast spans of time give rise to creatures such as ourselves who are able to design and build Boeing 747s amongst other things. You know this hypothesis as well as we do and we are both aware of the difficulties in finding evidence that it did or even could happen.
Until you can provide solid answers including RELEVANT compensating flows, relying on statistical miracles and just-so stories backed by ideological imposition of materialism is not good enough. Those original first-step replicators, those intermediate steps, those first living cells or reasonable models need to be shown on empirical observation, not ideological speculation and gross extrapolation. That is the root-node challenge in the UD pro-Darwinism essay challenge that has been on the table for, what, two and a half years, unanswered. As well you know from trying to push it aside and getting back to knocking at intelligently designed configuration. Where, that -- aka design -- is in fact the only empirically warranted adequate cause for FSCO/I, to the point where we are fully warranted to infer per best current explanation from FSCO/I to design as credible cause. Never mind your dismissive one-liner.

But, let's spot you the bigger "half" of the problem, getting to a viable cell. Onward, much the same problem still obtains, as FSCO/I is needed in copious quantities; needed in many distinct sub-systems to get viable body plans for complex life forms. Maybe 4 dozen new cell types plus embryological assembly programs or the equivalent to actually build body plans. Genomes some 10 - 100+ mn bases each, on reasonable estimates and observations. Where FSCO/I, because configs have that just-so specificity indicated by the wiring diagram pattern (up to some tolerance), comes in islands deeply isolated in config spaces. The search resources to cross the intervening seas of non-function just are not there. Body-plan level macroevo runs into much the same difficulty.

So, for OOL there is a threshold required for viable life that poses a conundrum. And empirical demonstration of causal adequacy is persistently missing. For body plans, the complexity threshold for viability poses much the same problem. So, all at once or step by step, Hoyle had a point. And the empirical grounding required for evolutionary materialist models of OOL or OO body plans [OOBP, and done] simply is not there. So, yet again, with due allowance for rhetorical flourishes on tornadoes and junkyards, Sir Fred had a point. KF

kairosfocus
April 11, 2015 at 07:28 AM PDT
The quote from Sewell reads like an extended statement of Hoyle's Fallacy which, as we know, is a specific case of the strawman fallacy. I challenge you to find anyone - Darwinist, evomat, a/mat, whatever - who has argued that complex machines can spring into existence fully-formed ex nihilo. The only person who even suggested that a complete Boeing 747 could be whisked up by a tornado from parts in a junkyard was Hoyle. Did he really think he was the only one to have spotted this problem?

If you want a very simple a/mat explanation of how complex machines could be created by natural processes, one possibility is you start with basic chemicals which assemble more complex molecules, which form self-replicating structures, which over vast spans of time give rise to creatures such as ourselves who are able to design and build Boeing 747s amongst other things. You know this hypothesis as well as we do, and we are both aware of the difficulties in finding evidence that it did or even could happen.

The problem is that your alternative of Intelligent Design offers no better explanatory purchase. First, it answers a different question to the one you are asking of science. It is a suggestion of who might have done it, not how it was done, which is what you are demanding science tells us. Second, let us accept, for the sake of argument, that the emergence of life is so improbable that the only reasonable explanation is the intervention of an intelligent agent. To create life - and possibly a universe in which it can survive - such an agent must contain the necessary amount of knowledge or information which, from our perspective, would make it nearly if not actually omniscient. Such a being must, therefore, itself be hugely complex and, hence, hugely improbable. Which prompts the perfectly reasonable question of whence came the Designer? The only rational explanation for such a hugely improbable entity is another Designer - and so on. The only way to halt an infinite regress is to posit an uncaused first cause (UFC). The problem is that a UFC smacks too much of being just a ploy to get you out of an unpalatable alternative. A UFC is declared by fiat, not because there is any good reason to think such a thing exists. Besides, a UFC must itself be infinite to escape having a beginning, so it doesn't really get you out of the quandary of having to choose between two equally unsatisfactory alternatives.

Seversky
April 11, 2015 at 05:31 AM PDT
You are under arrest for bringing the Emperor into disrepute? -- cf added pic

kairosfocus
April 11, 2015 at 04:43 AM PDT
Niw & WJM: I think it is time to stand; but then, that is literally written into my name and blood, backed by 1,000 years of history. Tends to give a nose for kairos. Why not clip Wiki, counselling us aright against its known general leanings:
In rhetoric kairos is "a passing instant when an opening appears which must be driven through with force if success is to be achieved."[1] Kairos was central to the Sophists, who stressed the rhetor's ability to adapt to and take advantage of changing, contingent circumstances. In Panathenaicus, Isocrates writes that educated people are those “who manage well the circumstances which they encounter day by day, and who possess a judgment which is accurate in meeting occasions as they arise and rarely misses the expedient course of action". Kairos is also very important in Aristotle's scheme of rhetoric. Kairos is, for Aristotle, the time and space context in which the proof will be delivered. Kairos stands alongside other contextual elements of rhetoric: The Audience, which is the psychological and emotional makeup of those who will receive the proof; and To Prepon, which is the style with which the orator clothes the proof.
(My favourite case lies in Ac 17, where Paul was literally laughed out of court in Athens. But at length, the few, the despised and dismissed, the ridiculed prevailed. Because, of the sheer raw power of their case never mind who dominate and manipulate at the moment.) Now, yes, it is a bold stance to say that as facets of a Jewel cannot be separated without destroying its function, the LLN-probability/fluctuations, statistical thermodynamics, informational view and information facets of the design case are inextricably, irreducibly interactive and mutually reinforcing. With, the statistical underpinnings of 2LOT being prominent. It is worth clipping SalC's own recent cite on that law from Clausius, at 570 in the previous thread:
Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time
He of course emphasises the reference to heat; I have highlighted the relevance of the concurrent energy process. I also, again, refer to the fact that for 100+ years 2LOT has been inextricably linked to statistical underpinnings that go well beyond just heat flow. I again cite a favourite basic college-level Physics text, as summarised in my online note, App I:
Yavorski and Pinski, in the textbook Physics, Vol I [MIR, USSR, 1974, pp. 279 ff.], summarise the key implication of the macro-state and micro-state view well: as we consider a simple model of diffusion, let us think of ten white and ten black balls in two rows in a container. There is of course but one way in which there are ten whites in the top row; the balls of any one colour being for our purposes identical. But on shuffling, there are 63,504 ways to arrange five each of black and white balls in the two rows, and 6-4 distributions may occur in two ways, each with 44,100 alternatives. So, if we for the moment see the set of balls as circulating among the various different possible arrangements at random, and spending about the same time in each possible state on average, the time the system spends in any given state will be proportionate to the relative number of ways that state may be achieved. Immediately, we see that the system will gravitate towards the cluster of more evenly distributed states. In short, we have just seen that there is a natural trend of change at random, towards the more thermodynamically probable macrostates, i.e the ones with higher statistical weights. So "[b]y comparing the [thermodynamic] probabilities of two states of a thermodynamic system, we can establish at once the direction of the process that is [spontaneously] feasible in the given system. It will correspond to a transition from a less probable to a more probable state." [p. 284.] This is in effect the statistical form of the 2nd law of thermodynamics. Thus, too, the behaviour of the Clausius isolated system above [with interacting sub-systemd A and B that transfer d'Q to B due to temp. difference] is readily understood: importing d'Q of random molecular energy so far increases the number of ways energy can be distributed at micro-scale in B, that the resulting rise in B's entropy swamps the fall in A's entropy. Moreover, given that [FSCO/I]-rich micro-arrangements are relatively rare in the set of possible arrangements, we can also see why it is hard to account for the origin of such states by spontaneous processes in the scope of the observable universe. (Of course, since it is as a rule very inconvenient to work in terms of statistical weights of macrostates [i.e W], we instead move to entropy, through s = k ln W. Part of how this is done can be seen by imagining a system in which there are W ways accessible, and imagining a partition into parts 1 and 2. W = W1*W2, as for each arrangement in 1 all accessible arrangements in 2 are possible and vice versa, but it is far more convenient to have an additive measure, i.e we need to go to logs. The constant of proportionality, k, is the famous Boltzmann constant and is in effect the universal gas constant, R, on a per molecule basis, i.e we divide R by the Avogadro Number, NA, to get: k = R/NA. The two approaches to entropy, by Clausius, and Boltzmann, of course, correspond. In real-world systems of any significant scale, the relative statistical weights are usually so disproportionate, that the classical observation that entropy naturally tends to increase, is readily apparent.)
This underlying context is easily understood and leads logically to 2LOT as an overwhelmingly likely consequence. Beyond a reasonable scale, fluctuations beyond a very narrow range are statistical miracles, that we have no right to expect to observe. And, that then refocusses the issue of connected, concurrent energy flows to provide compensation for local entropy reductions. As I point out in the OP, where something like origin of FSCO/I is concerned, the only actually observed pattern is that:
. . . the reasonable — and empirically warranted — expectation, is
a: to find energy, mass and information sources and flows associated with
b: energy converters that provide shaft work or controlled flows [I use a heat engine here but energy converters are more general than that], linked to
[Figure: a heat engine partially converts heat into work]
c: constructors that carry out the particular work, under control of
d: relevant prescriptive information that explicitly or implicitly regulates assembly to match the wiring diagram requisites of function, also with
e: exhaust or dissipation otherwise of degraded energy [typically, but not only, as heat . . . ] and discarding of wastes. (Which last gives relevant compensation where dS cosmos rises.)
In short, appeal to irrelevant "compensation" is questionable. That is the context in which we can see the force of your remark, WJM, that:
Often the terminology used by those we debate against itself reveals what is hidden underneath, like when our opponents use the term “is consistent with” or demand we answer “are you saying the 2LoT has been violated?” What is being hidden is that they are preventing a reasonable discussion of the plausibility, as opposed to the bare possibility, that certain arrangements of matter can be in good faith be expected to spontaneously occur given any amount of time and given the observed probabilistic/entropic habits of matter/energy/information. Saying that X arrangement “could have” occurred and still be “consistent with” 2LoT (or some other probabilistic principle) is essentially avoiding the argument. Saying that “highly improbable things happen all the time” is avoiding the argument.
The studious avoiding of relevance and of the actually observed pattern speaks tellingly. It is, in the abstract, logically and physically possible for arbitrarily large fluctuations to occur. But, of course, such fluctuations are, by virtue of the applicable statistics, also vanishingly unlikely. That is just as much the case, but it is suspiciously absent from serious discussion, and mocked, nit-picked or dismissed when the likes of a Sewell says:
. . . The second law is all about probability, it uses probability at the microscopic level to predict macroscopic change: the reason carbon distributes itself more and more uniformly in an insulated solid is, that is what the laws of probability predict when diffusion alone is operative. The reason natural forces may turn a spaceship, or a TV set, or a computer into a pile of rubble but not vice-versa is also probability: of all the possible arrangements atoms could take, only a very small percentage could fly to the moon and back, or receive pictures and sound from the other side of the Earth, or add, subtract, multiply and divide real numbers with high accuracy. The second law of thermodynamics is the reason that computers will degenerate into scrap metal over time, and, in the absence of intelligence, the reverse process will not occur; and it is also the reason that animals, when they die, decay into simple organic and inorganic compounds, and, in the absence of intelligence, the reverse process will not occur. The discovery that life on Earth developed through evolutionary “steps,” coupled with the observation that mutations and natural selection — like other natural forces — can cause (minor) change, is widely accepted in the scientific world as proof that natural selection — alone among all natural forces — can create order out of disorder, and even design human brains, with human consciousness. Only the layman seems to see the problem with this logic. In a recent Mathematical Intelligencer article ["A Mathematician's View of Evolution," The Mathematical Intelligencer 22, number 4, 5-7, 2000] I asserted that the idea that the four fundamental forces of physics alone could rearrange the fundamental particles of Nature into spaceships, nuclear power plants, and computers, connected to laser printers, CRTs, keyboards and the Internet, appears to violate the second law of thermodynamics in a spectacular way.1 . . . . What happens in a[n isolated] system depends on the initial conditions; what happens in an open system depends on the boundary conditions as well. As I wrote in “Can ANYTHING Happen in an Open System?”, “order can increase in an open system, not because the laws of probability are suspended when the door is open, but simply because order may walk in through the door…. If we found evidence that DNA, auto parts, computer chips, and books entered through the Earth’s atmosphere at some time in the past, then perhaps the appearance of humans, cars, computers, and encyclopedias on a previously barren planet could be explained without postulating a violation of the second law here . . . But if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here.” Evolution is a movie running backward, that is what makes it special. THE EVOLUTIONIST, therefore, cannot avoid the question of probability by saying that anything can happen in an open system, he is finally forced to argue that it only seems extremely improbable, but really isn’t, that atoms would rearrange themselves into spaceships and computers and TV sets . . . [NB: Emphases added. I have also substituted in isolated system terminology as GS uses a different terminology.]
That evolutionary materialism seems wedded to such statistical miracles, and that it is quite willing to exploit institutional clout to silence "but the emperor is naked", are not healthy signs. It is time to stand. KF

kairosfocus
April 11, 2015 at 04:15 AM PDT
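[ --> The statistical weights clipped from Yavorski and Pinski in the comment above can be checked directly; a minimal sketch using only standard-library binomial coefficients: ]

```python
from math import comb

# Ten white and ten black balls spread over two rows of ten positions each.
# Take the macrostate to be "how many white balls sit in the top row"; its
# statistical weight is the number of distinct arrangements realising it.
def weight(whites_on_top: int) -> int:
    # choose which top-row slots hold white balls, and which bottom-row slots
    # hold the remaining white balls
    return comb(10, whites_on_top) * comb(10, 10 - whites_on_top)

print(weight(10))  # 1      : all ten whites in the top row (the ordered start)
print(weight(5))   # 63504  : the even 5/5 split (252 * 252)
print(weight(6))   # 44100  : a 6-4 distribution (210 * 210)
print(weight(4))   # 44100  : the mirror 4-6 distribution
print(sum(weight(k) for k in range(11)))  # 184756 arrangements in all
```

[ --> The even and near-even splits dominate the total, which is just the statistical form of the 2nd law the clipped passage describes; the same counting, scaled up to molecular numbers, is why sizeable fluctuations become the "statistical miracles" referred to above. ]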
Thanks, kairosfocus, for your bold article. I agree, the ID arguments are countless. Some IDers prefer one, while other IDers prefer another. Anyway, there is room for all, and yes, as you say, they are "facets that inextricably interact as parts of a whole".

niwrad
April 10, 2015 at 11:38 AM PDT
F/N: In glancing at GS' just released essay collection, I note this:
In a June 15, 2012 post at www.evolutionnews.org, Max Planck Institute biologist W.E. Lönnig said "Normally the better your arguments are, the more people open their minds to your theory, but with ID, the better your arguments are, the more they close their minds, and the angrier they become. This is science upside down."
This seems to be a good slice of the problem we face. KF

kairosfocus
April 10, 2015 at 06:17 AM PDT
Often the terminology used by those we debate against itself reveals what is hidden underneath, like when our opponents use the term "is consistent with" or demand we answer "are you saying the 2LoT has been violated?" What is being hidden is that they are preventing a reasonable discussion of the plausibility, as opposed to the bare possibility, that certain arrangements of matter can be in good faith be expected to spontaneously occur given any amount of time and given the observed probabilistic/entropic habits of matter/energy/information. Saying that X arrangement "could have" occurred and still be "consistent with" 2LoT (or some other probabilistic principle) is essentially avoiding the argument. Saying that "highly improbable things happen all the time" is avoiding the argument.

William J Murray
April 10, 2015 at 05:41 AM PDT
F/N: For the record, on why I will continue to address probability/expectation/fluctuation issues, information (and particularly FSCO/I) and linked thermodynamics issues tied to the statistical underpinnings of 2LOT. KF
PS: For fellow anglers, I found a vid on how Penn International reels are made, and use it to illustrate the reality and requisites of FSCO/I.

kairosfocus
April 10, 2015 at 05:40 AM PDT