Last time, we looked at how Kolmogorov Complexity can be used to quantify the information in functionally specific complex organisation, by using the formal idea of a 3-D universal printer and constructor, 3-DP/C:
. . . it is but a short step to imagine a universal constructor device which, fed a compact description in a suitable language, will construct and present the [obviously, finite] object. Let us call this the universal 3-D printer/constructor, 3-DP/C.
Thus, in principle, reduction of an organised entity to a description in a suitably compact language is formally equivalent, in information terms, to the object itself, once the 3-DP/C is present as a conceptual entity. So, WLOG, reduction to a compact description d(E) in a suitable language is readily seen as identifying the information content of any given entity E.
For, d(E) is in effect a program (though it can simply be a functional organisational specification), so that, causally, in this logic-model world:
d(E) + 3-DP/C + n ==> E1, E2, . . . En.
Obviously, n is an auxiliary instruction setting the number of copies to be made . . . .
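As a toy sketch of the causal relation above (the function name and the representation of an "entity" are illustrative assumptions, not from the source), the 3-DP/C can be modelled as a pure function from a description d(E) plus a copy count n to the constructed copies E1 . . . En:

```python
def printer_constructor(d_E, n):
    """Toy 3-DP/C: 'construct' n copies of the entity described by d_E.

    Here an 'entity' is simply a labelled, materialised copy of its
    description, which is all the formal model needs: d(E) carries
    the information, and the constructor merely instantiates it.
    """
    return [("E%d" % (i + 1), d_E) for i in range(n)]

copies = printer_constructor("spec-of-E", 3)
print(copies)  # [('E1', 'spec-of-E'), ('E2', 'spec-of-E'), ('E3', 'spec-of-E')]
```

The point of the sketch is that every copy is fully determined by d(E) plus the auxiliary count n, which is why the description can stand in for the entity in information terms.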
We thus have a formal framework to reduce any entity to a description d(E), which is informational and has as metric
I = length[d(E)],
where a chain of Y/N q’s will yield I in bits, on the Kolmogorov assumption of compactness. I use “compact” to imply that we can get a good enough estimator of I by using something reasonably compact; we do not have to actually build a most-compact language.
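This "good enough estimator" point can be illustrated with a general-purpose compressor standing in for a compact language (a sketch under assumptions: Kolmogorov complexity itself is uncomputable, so any compressor only yields an upper bound on I, and the strings below are illustrative):

```python
import random
import zlib

def description_length_bits(description: str) -> int:
    """Upper-bound estimate of I = length[d(E)] in bits, using zlib
    at maximum compression as a stand-in for a 'compact enough'
    language; a better compressor would only lower the estimate."""
    return 8 * len(zlib.compress(description.encode("utf-8"), 9))

# A highly ordered description compresses far below a random-looking one:
ordered = "AB" * 500
rng = random.Random(0)
noisy = "".join(rng.choice("ABCDEFGH") for _ in range(1000))
print(description_length_bits(ordered) < description_length_bits(noisy))  # True
```

The design choice here mirrors the text: we never need the truly minimal language, only a compact-enough one to separate ordered, functionally specified descriptions from incompressible noise.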
This can also be used to explore the idea of fine tuning, e.g. let us use Barnes’ chart:
Now, let us start at X, conceived as a summary of the cosmology of our observed universe, as d(E) fed into the 3-DP/C, with E here being say a simulation of the cosmos and its history:
d(E) + 3-DP/C + n ==> E1, E2, . . . En. n here would be a population of runs assuming a random element.
Now, instead, feed d(E) into a noisy channel so we begin a random walk in the space of cosmologies,
d(E) –> lossy, noisy medium –> d*(E) + 3-DP/C + 1 ==> E*1
d*(E) –> LNM –> d**(E) + 3-DP/C + 1 ==> E**1
etc.
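The random walk above can be sketched directly (a minimal toy, assuming an illustrative four-letter alphabet for the description and a per-symbol corruption probability; none of these specifics are from the source):

```python
import random

def noisy_channel(d_E, flip_prob, rng):
    """Pass a description (a string over a small alphabet) through a
    lossy, noisy medium: each symbol is independently replaced by a
    random alphabet symbol with probability flip_prob."""
    alphabet = "ABCD"
    return "".join(rng.choice(alphabet) if rng.random() < flip_prob else c
                   for c in d_E)

rng = random.Random(42)
d = "A" * 60            # stand-in for the tuned description d(E)
walk = [d]
for _ in range(5):      # d(E) -> d*(E) -> d**(E) -> ...
    walk.append(noisy_channel(walk[-1], 0.05, rng))

# Hamming distance of each step from the starting description:
drift = [sum(a != b for a, b in zip(d, step)) for step in walk]
print(drift)
```

Feeding each perturbed description back into the 3-DP/C with n = 1 then yields the population of neighbouring cosmologies E*1, E**1, . . . whose outcomes can be mapped.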
Here, we can readily see how to construct a map of possible outcomes, much as Barnes’ chart illustrates. Though, of course, one can also explore border zones algebraically, etc.
The obvious result is that we see how our observed cosmos sits at a fine tuned operating point for a cosmos that is viable for life. (This also extends to exploring islands of function in configuration spaces in general.)
We see here how islands of function can have fitness landscapes allowing local hill climbing, but of course the issues of loss of function and of locking into a local peak arise:
So, now, we can use the 3-DP/C formalism to draw out what is involved in the idea of fine tuning, including of course, how intensely informational such a pattern is.
John Leslie is thought provoking:
“One striking thing about the fine tuning is that a force strength or a particle mass often appears to require accurate tuning for several reasons at once. Look at electromagnetism. Electromagnetism seems to require tuning for there to be any clear-cut distinction between matter and radiation; for stars to burn neither too fast nor too slowly for life’s requirements; for protons to be stable; for complex chemistry to be possible; for chemical changes not to be extremely sluggish; and for carbon synthesis inside stars (carbon being quite probably crucial to life). Universes all obeying the same fundamental laws could still differ in the strengths of their physical forces, as was explained earlier, and random variations in electromagnetism from universe to universe might then ensure that it took on any particular strength sooner or later. Yet how could they possibly account for the fact that the same one strength satisfied many potentially conflicting requirements, each of them a requirement for impressively accurate tuning?” [Our Place in the Cosmos, The Royal Institute of Philosophy, 1998 (courtesy Wayback Machine) Emphases added.]
AND:
“. . . the need for such explanations does not depend on any estimate of how many universes would be observer-permitting, out of the entire field of possible universes. Claiming that our universe is ‘fine tuned for observers’, we base our claim on how life’s evolution would apparently have been rendered utterly impossible by comparatively minor alterations in physical force strengths, elementary particle masses and so forth. There is no need for us to ask whether very great alterations in these affairs would have rendered it fully possible once more, let alone whether physical worlds conforming to very different laws could have been observer-permitting without being in any way fine tuned. Here it can be useful to think of a fly on a wall, surrounded by an empty region. A bullet hits the fly. Two explanations suggest themselves. Perhaps many bullets are hitting the wall or perhaps a marksman fired the bullet. There is no need to ask whether distant areas of the wall, or other quite different walls, are covered with flies so that more or less any bullet striking there would have hit one. The important point is that the local area contains just the one fly.” [Emphasis his.]
This fly-on-the-wall metaphor has become famous, and it aptly captures the locality of fine tuning.
Where, too, we see that fine tuning leading to islands of function is a broad phenomenon: the bits and pieces of a complex system need to fit and work together for the whole to work.
This of course, brings us full circle to Paley’s famous watch.
Paley, in his time, could describe the intricate nature of contrivance leading to an artifact, a system well adapted to the purpose of time keeping. But he had not the means to quantify the information involved; that would have to wait for over a century, until the ideas of surprise and reduction of uncertainty led to negative log probability metrics and informational entropy. Where, too, Jaynes et al were able to follow Szilard et al and draw a connexion between informational and thermodynamic entropy: in effect, the entropy of a macro-observable entity is the average further information required to specify its microstate, given a description of its macro-observable state.
Then came Kolmogorov, and we can therefore use the formalism of a 3-DP/C to understand information content, functionality based on information implicit in organisation, and islands of fine tuned function amidst seas of non-function, and thus the blind search challenge.
Paley, in his Ch 2, had a further contribution that has been even more underestimated: he saw that the additionality of self-replication vastly increases the complex functionality to be explained. This means that the origin of life is even more complex than many acknowledge, and that the origin of sustainable, novel body plans is even more challenging.
Coming back to focus on fine tuning at cosmological scale, Sir Fred Hoyle, holder of a Nobel-equivalent prize, has some choice words:
[Sir Fred Hoyle, in a talk at Caltech c. 1981 (nb. this longstanding UD post):] From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of 12 C to the 7.12 MeV level in 16 O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16.] . . .
also, in the same talk at Caltech:
The big problem in biology, as I see it, is to understand the origin of the information carried by the explicit structures of biomolecules. The issue isn’t so much the rather crude fact that a protein consists of a chain of amino acids linked together in a certain way, but that the explicit ordering of the amino acids endows the chain with remarkable properties, which other orderings wouldn’t give. The case of the enzymes is well known . . . If amino acids were linked at random, there would be a vast number of arrangements that would be useless in serving the purposes of a living cell. When you consider that a typical enzyme has a chain of perhaps 200 links and that there are 20 possibilities for each link, it’s easy to see that the number of useless arrangements is enormous, more than the number of atoms in all the galaxies visible in the largest telescopes. [ –> 20^200 = 1.6 * 10^260] This is for one enzyme, and there are upwards of 2000 of them, mainly serving very different purposes. So how did the situation get to where we find it to be? This is, as I see it, the biological problem – the information problem . . . . I was constantly plagued by the thought that the number of ways in which even a single enzyme could be wrongly constructed was greater than the number of all the atoms in the universe. So try as I would, I couldn’t convince myself that even the whole universe would be sufficient to find life by random processes – by what are called the blind forces of nature . . . . By far the simplest way to arrive at the correct sequences of amino acids in the enzymes would be by thought, not by random processes . . . . Now imagine yourself as a superintellect working through possibilities in polymer chemistry. Would you not be astonished that polymers based on the carbon atom turned out in your calculations to have the remarkable properties of the enzymes and other biomolecules?
Would you not be bowled over in surprise to find that a living cell was a feasible construct? Would you not say to yourself, in whatever language supercalculating intellects use: Some supercalculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule. Of course you would, and if you were a sensible superintellect you would conclude that the carbon atom is a fix.
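Hoyle’s configuration-space arithmetic above (flagged in the bracketed note) can be checked directly, since 20 amino-acid choices at each of 200 links gives 20^200 arrangements in total:

```python
# 20 possibilities at each of 200 links in the chain:
arrangements = 20 ** 200

# The exact integer has 261 decimal digits, i.e. it is about 1.6 * 10^260,
# matching the figure noted in the quote above.
digits = str(arrangements)
print(len(digits))   # 261
print(digits[:2])    # 16
```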
. . . and again:
I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the [–> nuclear synthesis] consequences they produce within stars. [“The Universe: Past and Present Reflections.” Engineering and Science, November, 1981. pp. 8–12]
Food for thought. END