
ID Foundations, 13: Of bird necks and beaks, robots, micro-level black swan events, inductive turkeys & the design inference, the vNSR and OOL (with hints on economic transformation)


Over the past few days, I have been reflecting a bit on carrying design theory-relevant thought onwards to issues tied to education and economic transformation.

In so doing, I found myself looking at a micro-level, personal black swan event, as I watched student robots picking and placing little plastic widgets much like . . . like . . . a chicken, or a duck.

Or, a swan.

Wait a minute: a swan’s long neck, beak and head form . . . a robot arm manipulator (with built-in sensor turret) on a turtle robot body capable of walking, swimming and flying:

[Figure: a bird as a turtle bot with a bird-beak manipulator-arm]

(Pardon the Canada Goose stand-in for a swan.)

So, maybe, birds do have “arms” after all. They didn’t just sacrifice forelimbs as arms to get wings; they used their heads as arms.

Thinking back, h’mm: how do birds build nests again?

SHOULD have been obvious!

But, it wasn’t; there was an “aha!” moment. And, looking back, we could see the signs that were there all along but missed.

Wiki’s article on Black Swan theory gives a useful context for that, when it clips Taleb in the NYT:

What we call here a Black Swan (and capitalize it) is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.

I stop and summarize the triplet: rarity, extreme impact, and retrospective (though not prospective) predictability. A small number of Black Swans explains almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives.

In short, we can come to a threshold where we are surprised by something we should have spotted but didn’t, and the experience can be transformative for good or ill:

There was a certain turkey who showed up for breakfast at 9 am sharp every morning, and was fed well. An inductivist, he inferred an exception-less law of nature that explained all his observations most satisfactorily, and with abundant and enjoyable predictive power. He had great confidence in his empirically reliable, well-tested theory.

No need to listen to that silly old “fundy” “supernaturalist” Owl who was saying there was more to the story than meets the eye!

Then, one fine day, the date was December 24th . . .

This set me to thinking about paradigms in science and the issue of inference to design. For instance, we may capture this in a flowchart:

[Figure: the per-aspect design inference explanatory filter]

It can be reduced to an algebraic expression, too.

Whereby,

a: once we have objective reason to assign the specificity variable (S) to 1,

b: can measure the information content I on a reasonable scale in bits, and

c: are beyond the 500 bit or 1,000 bit thresholds (for our solar system or the observed cosmos)

d: we can with confidence — of course within the provisionality limitations of all inductive knowledge claims —  infer that

e: complex, specified information (especially where this is functionally specified)  is best explained by choice contingency, i.e. design, if the Chi value is at least 1:

Chi_500 = I*S – 500, bits beyond the solar system threshold

Chi_1000 = I*S – 1000, bits beyond the observed cosmos threshold
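
For concreteness, here is a minimal sketch of the expression in code (Python; the function names and the worked 1,200-bit example are illustrative choices for this post, not part of the metric itself):

def chi(i_bits, s, threshold=500):
    """Chi metric as given above: information content I (in bits) times
    the specificity dummy variable S (1 if objectively specified, else 0),
    minus the threshold (500 bits for the solar system, 1,000 bits for
    the observed cosmos)."""
    return i_bits * s - threshold

def design_inferred(i_bits, s, threshold=500):
    # Per the post: infer design (provisionally, as with all inductive
    # knowledge claims) when Chi comes out at 1 or more.
    return chi(i_bits, s, threshold) >= 1

# Illustrative worked case: a 1,200-bit functionally specific item.
print(chi(1200, 1))                    # 700 bits beyond the solar system threshold
print(design_inferred(1200, 1))        # True
print(design_inferred(1200, 1, 1000))  # True: beyond the cosmos threshold too
print(design_inferred(1200, 0))        # False: S = 0, no objective specification

Notice how the dummy variable S does the gatekeeping: with S = 0 the metric can never reach 1, however many bits are present.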

Why is it that so many find it ever so hard to acknowledge what seems so obvious and well-warranted (as in, the whole Internet and world of technology stand in support as test-instances where we can directly cross-check)?

Well . . . since so many want to angrily mock and dismiss the classic Lewontin clip on materialist a priorism in science and expectations/impositions that can diverge from wider reality, let’s use a current one from Mahner, instead:

. . . metaphysical naturalism is a constitutive ontological principle of science in that the general empirical methods of science, such as observation, measurement and experiment, and thus the very production of empirical evidence, presuppose a no-supernature principle . . . .

Metaphysical or ontological naturalism (henceforth: ON) [“roughly” and “simply”] is the view that all that exists is our lawful spatiotemporal world. Its negation is of course supernaturalism: the view that our lawful spatiotemporal world is not all that exists because there is another non-spatiotemporal world transcending the natural one, whose inhabitants—usually considered to be intentional beings—are not subject to natural laws . . . .

Both scientists and science educators keep being challenged by creationists of all shades, who try hard to reintroduce supernaturalist explanations into biology and into all the areas of science that concern the origin of the world in general and of human beings in particular. A major aspect of this debate is the role of ON in science . . . .

ON is not part of a deductive argument in the sense that if we collected all the statements or theories of science and used them as premises, then ON would logically follow. After all, scientific theories do not explicitly talk about anything metaphysical such as the presence or absence of supernatural entities: they simply refer to natural entities and processes only. Therefore, ON rather is a tacit metaphysical supposition of science, an ontological postulate. It is part of a metascientific framework or, if preferred, of the metaparadigm of science that guides the construction and evaluation of theories, and that helps to explain why science works and succeeds in studying and explaining the world. [“The Role of Metaphysical Naturalism in Science,” Science & Education, 2011 (Springer Science+Business Media B.V.)]

A priori worldview commitments like that are notorious roots of ideological blindness.

Back to the birds . . .

Let’s add something to the imaginary bird robot, by way of a design exercise towards industrial transformation and solar system colonisation. In addition to its functions that would make it useful, let us make it into a von Neumann self-replicator (vNSR):

As Ralph Merkle outlined, such a vNSR is potentially quite useful:

[T]he costs involved in the exploration of the galaxy [or solar system] using self replicating probes would be almost exclusively the design and initial manufacturing costs. Subsequent manufacturing costs would then drop dramatically . . . . A device able to make copies of itself but unable to make anything else would not be very valuable. Von Neumann’s proposals centered around the combination of a Universal Constructor, which could make anything it was directed to make, and a Universal Computer, which could compute anything it was directed to compute. This combination provides immense value, for it can be re-programmed to make any one of a wide range of things . . . [[Self Replicating Systems and Molecular Manufacturing, Xerox PARC, 1992. (Emphases added.)]

How can that be done?

A kinematic vNSR requires:

(i) an underlying storable code to record the required information to create not only (a) the primary functional machine [[here, a Turing-type “universal computer”] but also (b) the self-replicating facility; and, that (c) can express step by step finite procedures for using the facility;
(ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with
(iii) a tape reader [[called “the constructor” by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling:
(iv) position-arm implementing machines with “tool tips” controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by
(v) either:
(1) a pre-existing reservoir of required parts and energy sources, or
(2) associated “metabolic” machines carrying out activities that as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.
Also, parts (ii), (iii) and (iv) are each necessary for, and together are jointly sufficient to implement, a self-replicating machine with an integral von Neumann universal constructor. That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [[Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).] This irreducible complexity is compounded by the requirement (i) for codes, requiring organised symbols and rules to specify both steps to take and formats for storing information, and (v) for appropriate material resources and energy sources.
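To see that mutual dependence concretely, consider a deliberately toy sketch (Python; every name in it is invented for illustration, and it models only the logic of parts (ii) to (v), not von Neumann's actual kinematic construction):

BLUEPRINT = ["reader", "arm", "tape_copier"]  # part (ii): the coded tape/blueprint

def read(tape):                      # part (iii): the tape reader
    for instruction in tape:
        yield instruction

def build(part, parts_bin):          # part (iv): positioning arm with tool tip
    if part not in parts_bin:        # part (v): reservoir of parts and energy
        raise RuntimeError("missing part: " + part)
    return part

def replicate(tape, parts_bin):
    offspring = [build(step, parts_bin) for step in read(tape)]
    offspring_tape = list(tape)      # the tape itself must also be copied
    return offspring, offspring_tape

parts_bin = {"reader", "arm", "tape_copier"}
machine, tape_copy = replicate(BLUEPRINT, parts_bin)
print(machine)                       # ['reader', 'arm', 'tape_copier']

try:                                 # knock out any one part and the cycle fails
    replicate(BLUEPRINT, parts_bin - {"arm"})
except RuntimeError as e:
    print(e)                         # missing part: arm

Remove any one of tape, reader, arm or parts reservoir and the replication cycle fails: the irreducible-complexity point in miniature.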
Immediately, we are looking at islands of organised function for both the machinery and the information in the wider sea of possible (but mostly non-functional) configurations. In short, outside such functionally specific — thus, isolated — information-rich hot (or, “target”) zones, want of correct components and/or of proper organisation and/or co-ordination will block function from emerging or being sustained across time from generation to generation. So, once the set of possible configurations is large enough and the islands of function are credibly sufficiently specific/isolated, it is unreasonable to expect such function to arise from chance, or from chance circumstances driving blind natural forces under the known laws of nature.
Further, let us consider a tape of 1,000 bits (= 125 bytes), which is plainly grossly insufficient to specify the parts and instructions for a von Neumann replicator.
However, the number of possible configurations of 1,000 bits is 1.07 * 10^301, more than ten times the square of the 10^150 states the 10^80 atoms of our observed universe would take up across a reasonable estimate of its lifespan. So, viewing our observed universe as a search device, it would scan less than 1 in 10^150th part of even so “small” a configuration space. That is, it would not carry out a credible “search” for islands of function, making such islands sufficiently isolated to be beyond the reasonable reach of a blind search.
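Those figures are straightforward to cross-check; a quick sketch, using Python's exact big-integer arithmetic:

configs = 2 ** 1000                 # all configurations of a 1,000-bit tape
print(len(str(configs)))            # 302 -> a 302-digit number, about 1.07 * 10^301
print(str(configs)[:4])             # '1071' -> the leading digits of 1.07...e301

universe_states = 10 ** 150         # states the observed universe could take up
print(configs // universe_states**2)   # 10 -> more than ten times (10^150)^2
print(universe_states / configs)       # ~9.3e-152 -> under 1 in 10^150 sampled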
But also, this is in a context. 
 
For, while our technology has not been able as yet to create such a three-dimensional, real world self-replicator [[as opposed to computer cellular automaton models], in fact such devices are common: the living cell.
On the role played by mRNA, tRNA and Ribosomes in such cells, Merkle notes:
We can view a ribosome as a degenerate case of [[a Drexler] assembler [[i.e. a molecular scale von Neumann-style replicator]. The ribosome is present in essentially all living systems . . . It is programmable, in the sense that it reads input from a strand of messenger RNA (mRNA) which encodes the protein to be built. Its “positional device” can grasp and hold an amino acid in a fixed position (more accurately, the mRNA in the ribosome selects a specific transfer RNA, which in its turn was bound to a specific amino acid by a specific enzyme). The one operation available in the “well defined set of chemical reactions” is the ability to make a peptide bond [[NB: This works by successively “nudging” the amino acid-armed tip of the codon-matched tRNA in the ribosome’s A site to couple to the amino acid tip of the preceding tRNA (now in the P site) and ratcheting the mRNA forward; thus elongating the protein’s amino acid chain step by step] . . . .
[[T]he ribosome functions correctly only in a specific kind of environment. There must be energy provided in the form of ATP; there must be information provided in the form of strands of mRNA; there must be compounds such as amino acids; etc. etc. If the ribosome is removed from this environment it ceases to function. [[Self Replicating Systems and Molecular Manufacturing, Xerox PARC, 1992. (Parentheses, emphases and links added. Notice as well how the concept of functionally specific complex information naturally emerges from Merkle’s discussion.)]
That is: in living cells, DNA strands of typically 100,000 to 4,000,000,000 four-state elements provide a “tape” that is transcribed and read in segments, as required, by molecular machines in the cell, and that is used to direct metabolic activity: creating and organising the proteins and other molecules of life, which carry out its functions.
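As a toy illustration of reading such a four-state tape three letters at a time, here is a sketch using a handful of real assignments from the standard genetic code (the sample strand and the function itself are merely illustrative scaffolding):

# A tiny subset of the standard genetic code (mRNA codon -> amino acid):
CODE = {
    "AUG": "Met",   # methionine; also the start signal
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "GCU": "Ala",   # alanine
    "UGG": "Trp",   # tryptophan
    "UAA": "STOP",  # a stop codon
}

def translate(mrna):
    """Read the 4-state 'tape' in 3-letter steps, as the ribosome does."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODE.get(mrna[i:i+3], "???")
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

print(translate("AUGUUUGGCGCUUGGUAA"))  # ['Met', 'Phe', 'Gly', 'Ala', 'Trp']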
And the configuration space specified by just 100,000 four-state elements has 9.98 * 10^60,205 possible states. (Also, the whole observed universe, across its thermodynamically credible lifespan, can only have about 10^150 atomic-level, Planck-time states.)
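That figure also checks out; logarithms keep the sketch quick, since the exact integer runs to over 60,000 digits:

from math import log10

exponent = 100_000 * log10(4)   # log10 of 4^100,000 (100,000 four-state elements)
print(exponent)                 # ~60205.99913 -> about 10^60,206 states
print(10 ** (exponent % 1))     # ~9.98 -> i.e. 9.98 * 10^60,205, as stated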
Thus, we see a needle-in-a-haystack challenge, which becomes an insuperable task even with so unrealistically small a blueprint as 1,000 bits.
So, it is at least plausible that cell based life is an artifact of design — a conclusion that is very unwelcome in the Lewontinian materialist camp.

Now, a common attempted rebuttal or dismissal of such reasoning is to claim that spontaneous, natural chance variation [[e.g. through mutations] and natural selection work together to create new functional biological information, so that chance alone does not have to do it all in one step. In Dawkins’ terms, there is an easy, step-by-step path up the back slope of “Mt Improbable.” But, this is an error, as quite plainly the claimed source of novel biological information is the variation, not the selection. For, as Darwin himself pointed out in the introduction to Origin, “any being, if it vary however slightly in any manner profitable to itself . . . will have a better chance of surviving, and thus [[will] be naturally selected.”

However, if there is no new function that comes from natural variation – and, for the very first life form, this must include replication capacity itself (with the above requisites as analysed by von Neumann) — no natural selection of advantageous variation will be possible.
That is, CULLING-OUT of inferior varieties based on their relative disadvantage in an environment cannot – thus, does not – explain or cause the ORIGIN of the varieties and of their underlying genetically coded biological information.  
Let’s refocus: the cell uses coded, algorithmic – thus symbolically representative – information.  
But, nothing in the direct working of the four fundamental physical forces (strong and weak nuclear, electromagnetic and gravitational) provides a base for the origin of sets of symbols and rules for interpreting and applying them through abstractly representing objects and/or actions on those objects.
So, some further worldview level issues follow for evolutionary materialism:
a –> If one’s worldview is Lewontinian-style a priori evolutionary materialism (cf. also Johnson’s critique), then one only has access to the undirected fundamental forces of physics and the basic properties of matter, energy and time as causal factors to explain anything and everything.
b –> But undirected strong and weak nuclear, electromagnetic and gravitational forces, acting on material objects that just happen to be there in one purposeless pattern or another some 13.7 BY ago, or 4.5 – 3.8 BY ago, have nothing – apart from appealing to blind chance that somehow by happenstance hit on just the right configurations, that is, “lucky noise” – to give rise to language (with its use of abstract symbols and meaningful rules).
c –> And, as has already been highlighted, the functionally specific complex organisation required to do that implies configuration spaces that are so vast that the entire resources of the observed cosmos, acting since the big bang, simply could not scan enough of the space to make a difference.
d –> Therefore it is fair comment to conclude that blind, undirected physical forces on the scope of our observed cosmos have no credible power to bring about meaningfully functional information, which just happens to lie at the core of self-replicating, cell-based life.
e –> One typical response of the committed evolutionary materialist is to try to deny “real” meaningfulness, coding and algorithmic step-by-step purposeful function in the cell. But that is simply to resort to denial of patent facts evident since the structure of DNA and the existence of the genetic code were identified. As Crick wrote to his son on March 19, 1953, right after making the epochal discovery:
“Now we believe that the DNA is a code. That is, the order of bases (the letters) makes one gene different from another gene (just as one page of print is different from another).”   
f –> Another is to claim that which we all know exists is only “apparent” design or purpose, but in reality is only the illusion of it. This of course begs the question and looks suspiciously like a closed mind: what reasonable degree of evidence would change such a mind to accept that the design is apparent for the very good reason that it is most likely real?
g –> So, we note Lewontin’s clear, all too revealing implication that for the committed evolutionary materialist no degree of evidence would ever suffice:
. . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [[“Billions and Billions of Demons,” NYRB, Jan 1997.]
h –> Plainly, we can only correct such by resort to stronger medicine: argument that in part works by reductio ad absurdum.
i –> For instance, the a priori materialist cannot deny the reality of meaningful language/ information without self-contradiction: to make the required denial is . . . to necessarily use purposeful language. (So, immediately, the materialist must concede that language is based on symbolic, purposeful representation and on rules for the meaningful combination of such symbols.)
j –> Equally, the materialist cannot coherently deny that intelligences routinely create such meaningful, symbolic information. And, once a reasonable threshold of complexity is passed we have never observed such messages and languages originating by chance processes and blind forces.

(This, for the excellent reason that such specifically functional organisation is far too isolated in the space of possible configurations to be reasonably expected to happen by chance and/or blind, undirected forces on the scope of our observed cosmos. [[And, to then resort to a proposed quasi-infinite array of “universes” is to jump to a speculation without observational evidence, i.e. philosophy not science. Worse, even such a speculation raises the question of the functional specificity and organisation of the universe-making bread factory, and the point that even in that case, the implication that our particular sub cosmos is locally finetuned would still be just as significant . . .] )

k –> But also, as von Neumann showed, by the very need to organise a self-replicating system, just such functional codes and linguistic information are at the heart of the cell. (And, without already functioning reproduction, natural selection by differential reproductive success is by definition impossible.)
l –> Thus, complex symbolic, algorithmically functional language expressed in coded symbols arranged according to rules of meaning is prior to self-replicating cell based life (and therefore any possibility of evolution by natural selection), while we only know one source for such functional complex organisation: prior intelligence.
m –> So, on inference to best explanation, evolutionary materialistic naturalism fails.
But, such news is not welcome in an age dominated by just such a priori evolutionary materialistic naturalism. Consequently, the controversy over signs of intelligence and inference to design is not unexpected.
However, it is not at all unusual in the history of science to see controversies over emerging theories that may well unsettle a long established scientific order. And, plainly, the theory of Intelligent Design (ID) is not simply the easily dismissed and demonised “Creationism in a cheap tuxedo” or “thin edge of a theocratic wedge” of ever so much anti-ID rhetoric.

Obviously, the first test case is the origin of cell-based life, which requires/uses such a vNSR mechanism, with all the irreducible complexity it entails. Such life is also critically dependent on the manufacture of proteins, which again uses a digital coded information mechanism. And the implication is that until this is in place, we have no basis for the reproduction that is the first part of the proposed Darwinian-type mechanisms for macro-evolution.

This, DV, we will follow up at a later date.

So, before we close off, let us look a little at the potential for industrial and economic transformation implicit in the ideas we have been discussing.

Recall how Merkle, in suggesting that vNSRs could transform exploration or colonisation of the galaxy (or, more realistically, the solar system), noted that once the process begins, we have an industrial base that sustains itself so long as we can feed in raw materials and components as required.

This, we see all around us in the world of life.

Now, mix in Marcin Jakubowski’s Global Village Construction Set, as he highlighted in a TED talk:

[youtube zIsHKrP-66s]

So, if we do identify appropriate digital specifications and sources of energy, raw materials and components, we can create a new industrial ecosystem, one that would be modular and of essentially village or small town scale.

Now, bring to bear our vNSR bird bots, duly loaded with the specifications.

Immediately, we see a potential catalytic technology that would erase the digital, technological and have/have-not divides.

And in so thinking, we see as well how the intelligently directed origin of an ecosystem, and of innovations in it, through coded, digitally stored, functionally specific, complex information, is in principle feasible. Indeed, desirable, if we are to move beyond the world of haves and have-nots.

That is, we have excellent reason to see that an ecology of self-replicating systems that can diversify across time is an eminently plausible and reasonably achievable project for the century ahead.

But, there is a sting in the tail.

For, if that is feasible, why could not a more advanced technical base do the same with carbon-chemistry, aqueous-medium nanotechnology?

In short, why — apart from ideological a priori materialism — is it such a no-no to even conceive that the world of life we see and are a part of could have been shaped by designers applying advanced nanotechnology?

And, given that simply thinking about such design approaches opens up serious alternatives for science, technology, economy, and civilisation, why is it so often so harshly asserted that design thinkers are inevitably, inescapably anti-science, and anti-progress?

Let us think and let us discuss, over some of that Inductive Turkey. END
