Uncommon Descent Serving The Intelligent Design Community

But, what if the Cambrian robot is self-replicating?


Dr Liddle, commenting on the Cambrian Robot thread (itself a takeoff on the Pre-Cambrian Rabbit thread), observes at comment no 5:

the ribosome is part of a completely self-replicating entity.

The others aren’t.

The ribosome didn’t “make itself” alone but the organism that it is a component of was “made” by another almost identical organism, which copied itself in order to produce the one containing the ribosome in question.

It is probably true that the only non-self-replicating machines are those designed by the intelligent designers we call people.

But self-replication with modification, I would argue, is the alternate explanation for what would otherwise look like it was designed by an intelligent agent.

I don’t expect you to agree, but it seems to me it’s a point that at least needs to be considered . . .

The matter is important enough to be promoted to a full post — UD discussion threads can become quite significant in their own right. So, let us now proceed . . .

In fact, the living cell implements a kinematic von Neumann Self Replicator [vNSR], which is integrated into a metabolising automaton:

Fig. A: A kinematic vNSR, generally following Tempesti’s presentation (Source: KF/IOSE)

Why is that important?

First, such a vNSR is a code-based, algorithmic, irreducibly complex object that requires:

(i) an underlying storable code to record the required information to create not only
(a) the primary functional machine [for a “clanking replicator” as illustrated in Fig. A, a Turing-type “universal computer”; in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also
(b) the self-replicating facility; and, that
(c) can express step by step finite procedures for using the facility;
(ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with
(iii) a tape reader [called “the constructor” by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling:
(iv) position-arm implementing machines with “tool tips” controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by
(v) either:
(1) a pre-existing reservoir of required parts and energy sources, or
(2) associated “metabolic” machines carrying out activities that as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment. 
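To help fix ideas, here is a toy sketch in Python of components (i) to (v). It is an illustration only: the instruction set, the tape contents and the parts reservoir are all invented for this post, and no claim is made that any actual cell implements its steps this way.

# Toy model of a kinematic von Neumann Self-Replicator (vNSR).
# Illustration only: the instruction "language" and all names are invented.

# (i)/(ii) The coded blueprint: a storable "tape" of symbolic instructions.
TAPE = ["BUILD primary_machine", "BUILD constructor", "COPY tape"]

def constructor(tape, parts_reservoir):
    # (iii) The tape reader/constructor interprets each coded instruction,
    # (iv) directing the "tool tip" action steps that carry it out.
    offspring = {"tape": None, "machines": []}
    for instruction in tape:
        op, target = instruction.split()
        if op == "BUILD":
            # (v) required parts must already exist in a reservoir; remove
            # any core part and replication fails -- the IC point below.
            if target not in parts_reservoir:
                raise RuntimeError("missing part: " + target)
            offspring["machines"].append(target)
        elif op == "COPY":
            offspring["tape"] = list(tape)  # the blueprint is itself copied
    return offspring

parts = {"primary_machine", "constructor"}
print(constructor(TAPE, parts))
# -> {'tape': ['BUILD primary_machine', 'BUILD constructor', 'COPY tape'],
#     'machines': ['primary_machine', 'constructor']}

Note that deleting any one element (the tape, the reader, the build steps, or a required part) leaves nothing that replicates.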

Also, parts (ii), (iii) and (iv) are each necessary for, and together are jointly sufficient to implement, a self-replicating machine with an integral von Neumann universal constructor.

That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).]

This irreducible complexity is compounded by the requirement (i) for codes, requiring organised symbols and rules to specify both steps to take and formats for storing information, and (v) for appropriate material resources and energy sources.

Immediately, we are looking at islands of organised function for both the machinery and the information in the wider sea of possible (but mostly non-functional) configurations.

In short, outside such functionally specific — thus, isolated — information-rich hot (or, “target”) zones, want of correct components and/or of proper organisation and/or co-ordination will block function from emerging or being sustained across time from generation to generation. So, once the set of possible configurations is large enough and the islands of function are credibly sufficiently specific/isolated, it is unreasonable to expect such function to arise from chance, or from chance circumstances driving blind natural forces under the known laws of nature.

Q: Per our actual direct observation and experience, what is the best explanation for algorithms, codes and associated irreducibly complex clusters of implementing machines?

A: Design as causal process, and thus such entities serve as signs pointing to design and, behind design, presumably to one or more designers. Indeed, presence of such entities would normally count in our minds as reliable signs of design. And thence, as evidence pointing to designers, the known cause of design.

So, why is this case seen as so different?

Precisely because these cases are in self-replicating entities. That is, the focus is on the chain of descent from one generation to the next, and it is suggested that sufficient variation can be introduced by happenstance and captured across time by reproductive advantages, so that one or a few original ancestral cells can give rise to the biodiversity we have seen since the Cambrian era.

But is that really so, especially once we see the scope of the information involved, in the context of the available resources of the atoms in our solar system or the observed cosmos — the cosmos that can influence us in a world where the speed of light seems to be a speed limit?

William Paley, in Ch II of his Natural Theology (1806), provides a first clue. Of course, some will immediately object to the context, a work of theology. But in fact good science has been done by theologians, and good science can appear in works of theology [just as some very important economics first appeared in what have been called tracts for the times, e.g. Keynes’ General Theory], so let us look at the matter on the merits:

Suppose, in the next place, that the person who found the watch should after some time discover that, in addition to all the properties which he had hitherto observed in it, it possessed the unexpected property of producing in the course of its movement another watch like itself — the thing is conceivable; that it contained within it a mechanism, a system of parts — a mold, for instance, or a complex adjustment of lathes, baffles, and other tools — evidently and separately calculated for this purpose . . . .
The first effect would be to increase his admiration of the contrivance, and his conviction of the consummate skill of the contriver. Whether he regarded the object of the contrivance, the distinct apparatus, the intricate, yet in many parts intelligible mechanism by which it was carried on, he would perceive in this new observation nothing but an additional reason for doing what he had already done — for referring the construction of the watch to design and to supreme art . . . .
He would reflect, that though the watch before him were, in some sense, the maker of the watch which was fabricated in the course of its movements, yet it was in a very different sense from that in which a carpenter, for instance, is the maker of a chair — the author of its contrivance, the cause of the relation of its parts to their use. [Emphases added.]

So, the proper foci are (i) the issue of self-replication as an ADDITIONAL capacity of a separately functional, organised complex entity, and (ii) the difference between generations 2, 3, 4 . . . and generation no 1. That is, first: once there is a sub-system with the stored information, additional complex organisation and step by step procedures needed to replicate an existing functional, complex, organised entity, then this is an additional case of FSCO/I [functionally specific, complex organisation and associated information] to be explained.

Hardly less important, the key issue is not the progress from one generation of self-replication to the next, but the origin of such an order of system: “the [original] cause of the relation of its parts to their use.”

Third, we do need to establish that cumulative minor variations and selection on functional and/or reproductive advantages would surmount the barrier of information generation without intelligent direction, especially where codes and algorithms are involved.

In the case of the Ribosome in the living cell these three levels interact:

 

Fig. B: Protein synthesis and the role of the ribosome as “protein amino acid chain factory” (Source: Wiki, GNU. See also a medically oriented survey here. )

Fig. C: The Ribosome in action, as a digital code-controlled “protein amino acid chain factory.” Notice the role of the tRNA as an amino acid [AA] taxi and as a nano-scale position-arm device with a tool-tip, i.e. a robot-arm. Also, notice the role of mRNA as an incrementally advanced digitally coded step by step assembly control tape.  (Source: Wiki, public domain.)
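To make the tape-reading concrete, here is a minimal Python sketch. It uses a tiny, deliberately incomplete fragment of the standard codon table, and it omits the tRNAs, initiation and elongation factors, and all of the underlying chemistry:

# The mRNA as a digitally coded tape, read one codon (3 bases) per step.
# Only six of the 64 real codon assignments are included here.
CODON_TABLE = {
    "AUG": "Met",                            # start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    chain = []
    for i in range(0, len(mrna) - 2, 3):     # advance the tape step by step
        residue = CODON_TABLE.get(mrna[i:i + 3])
        if residue == "STOP":                # halting signal: release chain
            break
        if residue:
            chain.append(residue)            # the tRNA "taxi" delivers the AA
    return chain

print(translate("AUGUUUGGCUAA"))             # ['Met', 'Phe', 'Gly']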

But — as Dr Liddle suggested —  isn’t this just a matter of templates controlled by the chemistry, much as the pebbles on Chesil beach, UK, grade based on whether a new pebble will fit the gaps in the existing pile of pebbles? And, what is a “code” anyway?

Let us begin from what a code is, courtesy Collins English Dictionary:

code, n
1. (Electronics & Computer Science / Communications & Information) a system of letters or symbols, and rules for their association by means of which information can be represented or communicated for reasons of secrecy, brevity, etc.: binary code; Morse code. See also genetic code
2. (Electronics & Computer Science / Communications & Information) a message in code
3. (Electronics & Computer Science / Communications & Information) a symbol used in a code
4. a conventionalized set of principles, rules, or expectations: a code of behaviour
5. (Electronics & Computer Science / Communications & Information) a system of letters or digits used for identification or selection purposes . . .

[Collins English Dictionary – Complete and Unabridged © HarperCollins Publishers 1991, 1994, 1998, 2000, 2003]

Immediately, we see that codes are complex, functionally organised information-related constructs, designed to store or convey (and occasionally to conceal) information. They are classic artifacts of design, and given the conventional assignment of symbols and rules for their association to convey meaning by reference to something other than themselves, we see why: codes embed and express intent or purpose and what philosophers call intentionality.
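A trivial Python sketch makes the point about convention. The same message can ride on utterly different symbol assignments, because the mapping is assigned rather than dictated by the physics of dots, dashes or voltage levels; the second table below is invented on the spot and works just as well:

# Two conventions for the same message (Morse entries limited to S and O).
MORSE = {"S": "...", "O": "---"}
ARBITRARY = {"S": "xq", "O": "7!"}           # a made-up, equally workable code

def encode(msg, table):
    return " ".join(table[ch] for ch in msg)

print(encode("SOS", MORSE))                  # ... --- ...
print(encode("SOS", ARBITRARY))              # xq 7! xq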

But at the same time, codes are highly contingent arrangements of elements, so could they conceivably be caused by chance and/or natural affinities of things we find in nature? That is, couldn’t rocks falling off the cliff at the cliff end of Chesil beach spontaneously arrange themselves into a pile saying: “Welcome to Chesil beach?”

This brings up the issues of depth of isolation of islands of function in a space of possible configurations, and it brings up the issue of meaningfulness as a component of function. Also lurking is the question of what we deem logically possible as a prior causal state of affairs.

Of course, any particular arrangement of pebbles is possible, as pebbles are highly contingent. But if we were to see beach pebbles at Chesil (or the fairly similar Palisadoes long beach in Jamaica, which leads out to Port Royal and forms the protecting arm for the port of Kingston) spelling out the message just given, or a similar one, we would suspect design tracing to an intelligence as the most credible cause. This is because the particular sort of meaningful, functional configuration just seen is so deeply isolated in the space of possibilities for tumbling pebbles that we instinctively distinguish logical possibility from practical observability on chance plus blind natural mechanisms, vs an act of art or design that points to an artist or designer with requisite capacity.

But that is in the everyday world of observables, where we know that such artists are real. In the deep past of origins, some would argue, we have no independent means of knowing that such designers were possible, and it is better to infer to natural factors however improbable.

But in this lurks a cluster of errors. First, ALL of the deep past is unobservable, so we are inferring on best explanation from currently observed patterns to a reasonable model of the past.

Second, on infinite monkeys analysis grounds closely related to the foundations of the second law of thermodynamics, we know that such configurations are so exceedingly isolated in vast sets of possibilities that observing one is utterly implausible on the gamut of the observed cosmos. Such a message is not sufficiently likely to be credibly observable by undirected chance and necessity. And from experience we know that the sets of symbols and rules making up a code are well beyond the complexity involved in 125 bytes of information. That is, 1.07 * 10^301 possibilities, more than ten times the square of the 10^150 or so possibilities for Planck-time quantum states of the 10^80 atoms of our observed universe.
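As a quick check of those figures, using Python's exact integer arithmetic:

# 125 bytes = 1,000 bits, so the configuration space holds 2^1000 states.
n = 2 ** (125 * 8)
print(f"{n:.4e}")                  # 1.0715e+301
print(n > 10 * (10 ** 150) ** 2)   # True: more than ten times (10^150)^2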

Third, there is a lurking begging of the question: in effect the assertion of so-called methodological naturalism would here block the door to the possibility of evidence pointing to the logically possible state of affairs that life is the product of design.  We can see this from a now notorious declaration by Lewontin:

To Sagan, as to all but a few other scientists, it is self-evident that the practices of science provide the surest method of putting us in contact with physical reality, and that, in contrast, the demon-haunted world rests on a set of beliefs and behaviors that fail every reasonable test . . . .

It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. 

[From: “Billions and Billions of Demons,” NYRB, January 9, 1997. Bold emphasis added.]

That looks uncommonly like closed-minded question-begging, and — as Johnson pointed out — falls of its own weight, once it is squarely faced. We can safely lay this to one side.

But also, there is a chicken-egg problem. One best pointed out by clipping from a bit further along in the thread at 30 (which clips from Dr Liddle at 26):

the common denominator in the “robots” itemized in the OP, it seems to me, is that they are products of decision-trees in which successful prototypes are repeated,

a: as created by intelligent designers, in a context of coded programs used in their operation

usually with variation, and less successful prototypes are abandoned.

b: Again, by intelligently directed choice

In two of the cases, the process is implemented by intelligent human beings, who do the replicating externally, as it were (usually), set the criteria for success/failure, and only implement variants that have a pre-sifted high probability of success.

c: In short, you acknowledge the point, i.e. that it is intelligence that is seen empirically as capable of developing a robot (and presumably the arms and legs of Fig. A are similar to the position-arm device in Fig. B).

In the third case (the ribosome) I suggest that the replicating is intrinsic to the “robot” itself,

d: The problem here is the irreducible complexity in getting to function, as outlined again just above.

in that it is a component within a larger self-replicating “robot”;

e: it is a part of both the self-replicating facility and the metabolic mechanism, and uses a coded position-arm entity, the tRNA, that lock-and-key fits the mRNA tape that is advanced step by step in protein assembly; it has as well a starting and a halting process.

the criteria for success/failure is simply whether the thing succeeds in replicating,

f: The entity has to succeed at making proteins, which are in turn essential to its operations, i.e. this is chicken-egg irreducible complexity. I gather something like up to 75 helper molecules are involved.

and the variants are not pre-sifted so that even variants with very little chance of success are produced, and replicate if they can.

g: If the ribosome does not work right the first time, there can be no living cell that can either metabolise or self-replicate.

h: For that to happen, the ribosome has to have functioning examples of the very item it produces — code-based, functioning proteins.

i: In turn, the DNA codes for the proteins have to be right, and have to embrace the full required set, another case of irreducible complexity.

j: In short, absent the full core self-replicating and metabolic system right from the get-go, the system will not work.

But in both scenarios, the result is an increasingly sophisticated, responsive, and effective “robot”.

k: This sort of integrated irreducible complexity embedding massive FSCO/I has only one observed solution and cause: intelligent design.

l: In addition, the degree of complexity involved goes well beyond the search resources of the observed cosmos to credibly come up with a spontaneous initial working config.

m: So, we see here a critical issue for the existence of viable cell based life, and it points to the absence of a root for the Darwinian-style tree of life.

So, once we rule out a priori materialism, and allow evidence that is an empirically reliable sign of intelligence to speak for itself, design becomes a very plausible explanation for the origin of life.

What about the origin of major body plans?

After all, isn’t this “just” a matter of descent with gradual modification?

The problem here is that this in effect assumes that all of life constitutes a vast connected continent of functional forms joined by fine incremental gradations. What is the directly observed evidence for that? NIL: the fossils — the only direct evidence of former life forms — are notoriously dominated by suddenness of appearance, stasis of form, and disappearance or continuation into the modern era. That’s why Gould et al developed punctuated equilibria.

But the problem is deeper than that: we are dealing with code based, algorithmic entities.

Algorithms and codes are riddled with irreducible complexities and so don’t smoothly grade from, say, a Hello World to a full-bore operating system. Nor can we plausibly plead that more sophisticated code modules naturally emerge from the Hello World module through chance variation and trial and error discovery of novel function, so that the libraries can then be somehow linked together to form more sophisticated algorithms. Long before we got to that stage, the 125-byte threshold would have repeatedly been passed.
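A crude way to see this brittleness (an illustration only, not a model of biology): take a tiny working program, apply random single-character mutations, and count how many mutants even remain syntactically valid, never mind functionally improved. The program and trial count below are invented for the illustration:

import random
import string

SRC = 'def greet(name):\n    return "Hello, " + name\nprint(greet("World"))\n'

def mutate(src):
    # overwrite one randomly chosen character with a random printable one
    i = random.randrange(len(src))
    return src[:i] + random.choice(string.printable) + src[i + 1:]

random.seed(0)
TRIALS = 1000
ok = 0
for _ in range(TRIALS):
    try:
        compile(mutate(SRC), "<mutant>", "exec")   # syntax check only
        ok += 1
    except (SyntaxError, ValueError):
        pass
print(ok, "of", TRIALS, "mutants still parse")

Even the mutants that still parse are almost never functional improvements; they mostly just scramble a string or a name.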

In short, the origin of major new body plans — which must be expressed through embryogenesis — is not plausibly explained on chance plus necessity; i.e. we do not have a viable chance plus necessity path to novel body plans.

To my mind, this makes design a very plausible — and arguably the best — causal explanation for the origin of both life and novel body plans.

So, what do you think of this? Why? END

Comments
Elizabeth Liddle: “And the result is often really very effective ‘critters’ that appear to have been ‘ingeniously’ devised. However, the only ‘ingenuity’ involved was setting up the thing in the first place. The writer of the GA is not responsible for the winning ‘design’.”

But that statement is entirely dependent on the writer’s knowledge, intentions, actions during the program, and otherwise. If the writer (let’s say programmer) set up the environment to spawn particular results, responsibility is added. If the programmer intervened in the program, responsibility is added. If the programmer did not intervene but nevertheless knew just where the program would end up, responsibility is added.

“That’s the other big difference (as I think I pointed out on the other thread) between living things and human-designed robots. Human designs tend to be ‘brittle’, both metaphorically and actually (and the two things are somewhat related).”

“Tend to be”? What about self-replicating computer programs? You know, like GAs. While it’s probably not the point kairos was trying to make, what I’d note here is that your statement depends heavily on current technological limitations of humans. And even there the situation is complex - we’ve been “designing” organisms for a long time with selective breeding. Now, that’s not entirely fair, since you’re drawing the distinction between human-designed robots and living organisms - I think GAs and similar software could reasonably fit in there, but I’ll even put that aside. But still, the objection here comes down to “yes, but our technology is inferior”. So I have to ask: what happens if, or even when, our technological capabilities converge more on what’s seen in nature? Would you say that would strengthen a design inference with regard to biology, particularly evolutionary biology?

“Plus, the big incentives in science aren’t to support the status quo but to overturn it. That’s where the Nature papers and the Nobel prizes are.”

I’m pretty sure Nature papers and Nobel prizes are sometimes, even typically, written by and given to scientists who supported the status quo. Not to mention that not every status quo is purely scientific - see Lysenkoism. Really, the history of science is not nearly this ideal. There is a reason Max Planck made this quote: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

nullasalus
May 30, 2011, 11:41 AM PDT
Meleagar: I think you are peddling a myth :) Now it may well be that we “Darwinists” are wrong, and eventually the ID argument may prove persuasive. But really, we are not the blind egotists you seem to think we are! To us, the story does, actually, make sense! Plus, the big incentives in science aren’t to support the status quo but to overturn it. That’s where the Nature papers and the Nobel prizes are. Seriously, we neither have horns nor are we stupid! But we do disagree with you, that’s for sure. I’m grateful to be given the opportunity to find out just where and why. Cheers, Lizzie

Elizabeth Liddle
May 30, 2011, 11:14 AM PDT
@Brent, #3 I'll chip in in response here, but I haven't forgotten the OP!
Thanks, kairosfocus! I read Dr. Liddle’s comment on the other thread and just couldn’t believe it. How does appealing to the added complexity of self-replication help her argument? So, we know it takes humans to create robots with complexity, but with ADDED complexity, robots are probably just naturally occurring???
OK, so why does self-replication matter? Well, although self-replication can be (and, in the case of modern living things, is) incredibly complex, at its simplest, it needn’t be. For example, if you look at frost patterns on a window pane, you are looking at a very simple example of self-replication - a pattern begins, possibly because of a speck of dust on the window, and that pattern spawns a copy, which spawns a copy, etc. until you have a repeating pattern stretching across the glass.

That means that with a very simple “probiont”, consisting perhaps of no more than lipid bubbles going through cycles of enlargement, driven by, for example, nothing more complex than convection currents and osmotic forces, you’ve got something that is potentially, I would argue, a “self-designing system”. The reason being, and this was Darwin’s key insight, even though he had very little idea of the mechanics, that when you have self-replicators replicating with variance, and where some variants replicate better than others, little by little the best replicators, even though they may have gained their “edge” purely by virtue of stochastic events, will tend to dominate the population.

And we know that this does in fact work, because it’s exactly how GAs work. You start with a very simple self-replicator, and you let it replicate with random variation for many generations, in an “environment” in which some variants will replicate better than others. And the result is often really very effective “critters” that appear to have been “ingeniously” devised. However, the only “ingenuity” involved was setting up the thing in the first place. The writer of the GA is not responsible for the winning “design”. So it’s not that I’m “adding” complexity by invoking self-replication (although in the case of human design, making a self-replicating robot would of course be a far greater challenge than making a non-self-replicating robot). What I’m saying is that if you start with a population of self-replicators, you will tend to get a self-designing system.

I will add one important proviso, though - the seed self-replicators, although they can be very simple, must not be “brittle”. In other words, there must be a reasonable number of possible variants that won’t actually break the thing. That’s the other big difference (as I think I pointed out on the other thread) between living things and human-designed robots. Human designs tend to be “brittle”, both metaphorically and actually (and the two things are somewhat related). Whereas a bit of DNA can be subjected to repeats or deletions or substitutions and still result in some protein or other, or some signal or other.

OK, I have to go out for dinner now, but I promise I’ll read the OP more thoroughly in the next couple of days, and try to post a response.

Elizabeth Liddle
May 30, 2011, 11:10 AM PDT
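For reference, the GA scheme Dr Liddle describes just above can be sketched in a few lines of Python. The “OneMax” fitness used here (count the 1-bits) is an invented stand-in for “replicates better in its environment”; nothing in the sketch models real biochemistry:

import random

random.seed(1)
GENOME_LEN, POP, GENS, MUT = 32, 40, 60, 0.02

def fitness(genome):
    return sum(genome)                 # toy criterion: count the 1-bits

def replicate(genome):
    # copy with variance: each bit has a small chance of flipping
    return [bit ^ (random.random() < MUT) for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]          # better replicators dominate
    pop = [replicate(p) for p in parents for _ in range(2)]
print("best fitness after", GENS, "generations:", max(map(fitness, pop)))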
Darwin believed that the basic components of life were basically primordial “goo” (I remember my old biology textbook: protoplasm, nucleus, cell wall) that in aggregate would somehow generate life due to their innate natural qualities. If we could send back in time the 3D animations that faithfully recreate the vast, factory-like operation and nanotechnology that goes on and is present in a single cell, I think Darwin would probably burn his manuscript and make quick for the nearest church. It is only a century and a half of being acclimated to Darwinism and desensitized to the growing, sheer nonsense of it in the face of modern knowledge that keeps it from being outed as nothing more than materialist, Victorian-age myth. Well, that and the fact that so many reputations and egos would be shattered, and culture would dramatically shift in response.

Meleagar
May 30, 2011, 09:37 AM PDT
Thanks, kairosfocus! I read Dr. Liddle's comment on the other thread and just couldn't believe it. How does appealing to the added complexity of self-replication help her argument? So, we know it takes humans to create robots with complexity, but with ADDED complexity, robots are probably just naturally occurring???

Brent
May 30, 2011, 07:31 AM PDT
A code (remotely stored), a storage medium, a retrieval mechanism, a translation system, a sense-and-error-correction system (one could go on), all working in concert and all accessible, first, as a concept mapped into the code. No one should be debating, any longer, if the idea of design is warranted - Darwin was wrong. Only philosophical considerations, or deeply held prejudice, would reject the idea... clearly empirical science has little to do with it. Is it any wonder we laymen have come to view the educated classes (indeed the institution of education itself) with a great deal of skepticism? What most call science these days seems little more than materialist philosophy - a philosophy wielding the hammer it calls "science" and using that to enforce its philosophical hegemony.

arkady967
May 30, 2011, 06:43 AM PDT
An interesting and complex essay, kairosfocus, and it will take me some time to digest! Unfortunately I am busy for the rest of today, and back to work tomorrow, but I will try to do some sort of justice to your OP within the next few days. In the mean time I look forward to reading other responses.

Elizabeth Liddle
May 30, 2011, 03:44 AM PDT
