Uncommon Descent Serving The Intelligent Design Community

ID Foundations 15(c) — A FAQ on Front-Loading, thanks to Genomicus


Onlookers, Geno concludes for the moment with FAQs:

____________________

Geno: >>

A Testable ID Hypothesis: Front-loading, part C

In the last two articles on front-loading, I explained what the front-loading hypothesis is all about and some research questions we can ask from a front-loading perspective. This article will be an FAQ about the front-loading hypothesis. So, without further introduction, let’s begin (note: some of the content of this FAQ can be found in the previous two articles).

  1. What is front-loading?

"Front-loading is the investment of a significant amount of information at the initial stage of evolution (the first life forms) whereby this information shapes and constrains subsequent evolution through its dissipation. This is not to say that every aspect of evolution is pre-programmed and determined. It merely means that life was built to evolve with tendencies as a consequence of carefully chosen initial states in combination with the way evolution works." (Mike Gene, The Design Matrix: A Consilience of Clues, page 147)

In short, this ID hypothesis proposes that the earth was, at some point in its history, seeded with unicellular organisms that had the necessary genomic information to shape future evolution. Necessarily, this genomic information was designed into their genomes.

 

  2. How is front-loading different from directed panspermia?

 

In a paper published in the journal Icarus, Francis Crick and Leslie Orgel proposed the hypothesis of directed panspermia. According to this hypothesis, the earth was intentionally seeded with life forms by some intelligence. The front-loading hypothesis goes a step further and proposes that these life forms contained the necessary genomic information to shape the course of future evolution. For example, the origin of metazoan complexity would have been planned and anticipated by the genomic information in the first genomes. Thus, the front-loading hypothesis is inherently teleological and an ID hypothesis.

 

  3. Does front-loading propose that all the genes found in life were in the first life forms?

 

No, it does not. Front-loading does not suggest that all genes were there from the start. Indeed, many genes found in modern life forms are probably the result of purely unplanned mechanisms (gene duplication and subsequent divergence, for example). Nevertheless, genes essential for the origin and development of the metazoan body plan would be present in the first genomes (or have homologs in the first genomes).

 

  4. If genes necessary for the origin of metazoan life forms were placed in the first genomes, wouldn't random mutation have degraded them over time?

 

This is a common objection to the front-loading hypothesis, but it can be easily answered. These genes would be given an important function in the first life forms, such that they would be preserved across deep time. Front-loading does not rely, for the most part, on previously unexpressed genes simply being switched on at some later time.

 

  5. How could sophisticated molecular systems be front-loaded?

There are two basic solutions to the problem of front-loading sophisticated molecular systems, but more research is needed so that we can find out exactly how these solutions would work in practice. In theory, however, there’s the “bottom up” approach and the “top down” approach to front-loading molecular systems. In the “bottom up” approach, the original cells would contain the components of the molecular machine we want to front-load, but these components would be carrying out functions not related to the function of the molecular machine. Then, somehow (here’s where we need research), something causes them to associate such that they fit nicely with each other, forming a novel molecular machine.

The “top down” approach proposes that the first cells had a highly complex molecular machine, composed of, say, components A, B, C, D, E, F, G, H, and J. If we want to front-load a molecular machine composed of components A, B, C, and D, then this highly complex molecular machine contains a functional subset of A, B, C, and D. In other words, components E, F, G, H, and J would simply have to be deleted from the highly complex molecular machine, resulting in a molecular machine composed of A, B, C, and D. This model is actually testable. Under this model, we would tentatively predict that ancient homologs of a given molecular machine will be more complex than the machine itself.

  6. What testable predictions does the front-loading hypothesis make?

There are several testable predictions the front-loading hypothesis makes:

  1. Cytosine deamination. Of the three bases in DNA (adenine, guanine, and cytosine) that are prone to deamination, cytosine is the most likely to undergo deamination. This ultimately results in a C -> T transition. Cytosine deamination often causes severe genetic diseases in humans, so why would a front-loader choose cytosine as a base in DNA? It has been observed that C -> T transitions tend to produce codons for strongly hydrophobic amino acids, which leads to the following prediction from a front-loading perspective: a designer would have chosen cytosine because it would facilitate front-loading, in that mutations could be channeled in the direction of increased hydrophobicity. This prediction would be confirmed if key protein sequences in metazoan life forms were the result of numerous C -> T transitions.
  2. The genetic code. The front-loading hypothesis proposes that the universal optimal genetic code was present at the dawn of life: in other words, we won’t find precursors of sub-optimal genetic codes, because the genetic code was optimal from the start. Further, the front-loading hypothesis predicts that all 20 amino acids would have been used in the first life forms, and that the transcription, translation, and proof-reading machinery would have all been present at the start of life on earth.
  3. Biological complexity. Front-loading predicts that the last universal common ancestor (LUCA) was quite complex, complete with genes necessary for the origin and development of metazoan life forms.
  4. Protein sequence conservation. In eukaryotes, there are certain proteins that are extremely important. For example, tubulin is an important component of cilia; actin plays a major role in the cytoskeleton and is also found in sarcomeres (along with myosin), a major structure in muscle cells; and the list could go on. How could such proteins be front-loaded? Some of these proteins could simply be designed into the initial life forms, but some of them are specific to eukaryotes, and for a reason: they don't function that well in a prokaryotic context. For these proteins, how would a designer front-load them? Let's say X is the protein we want to front-load. How do we go about doing this? Well, firstly, we can design a protein, Y, that has a very similar fold to X, the future protein we want to front-load. Thus, a protein with similar properties to X can be designed into the initial life forms. But what is preventing random mutations from basically destroying the sequence identity of Y, over time, such that the original fold/sequence identity of Y is lost? To counter this, Y can also be given a very important function so that its sequence identity will be well conserved. Thus, we can make this prediction from a front-loading perspective: proteins that are very important to eukaryotes, and specific to them, will share deep homology (either structurally or in sequence similarity) with prokaryotic proteins, and importantly, these prokaryotic proteins will be more conserved in sequence identity than the average prokaryotic protein. Darwinian evolution only predicts the first part of that; it does not predict the second part (that the prokaryotic homologs will be unusually conserved). This is a testable prediction made exclusively by the front-loading hypothesis.
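The hydrophobicity claim behind prediction 1 can be illustrated with a small toy sketch (my own example, not from the article), using the standard genetic code and Kyte-Doolittle hydropathy values: second-position C -> U codon changes, the mRNA face of a C -> T transition in DNA, shift several codons toward markedly more hydrophobic residues.

```python
# Toy illustration: a handful of second-position C -> U codon changes and the
# resulting amino-acid swaps under the standard genetic code.
# Kyte-Doolittle hydropathy: higher = more hydrophobic.
KD = {"Pro": -1.6, "Leu": 3.8, "Ser": -0.8, "Phe": 2.8,
      "Ala": 1.8, "Val": 4.2, "Thr": -0.7, "Ile": 4.5}

# Standard genetic-code assignments for four such transitions:
transitions = {
    "CCU": ("Pro", "CUU", "Leu"),   # proline -> leucine
    "UCU": ("Ser", "UUU", "Phe"),   # serine -> phenylalanine
    "GCU": ("Ala", "GUU", "Val"),   # alanine -> valine
    "ACU": ("Thr", "AUU", "Ile"),   # threonine -> isoleucine
}

for before, (aa1, after, aa2) in transitions.items():
    delta = KD[aa2] - KD[aa1]
    print(f"{before} ({aa1}) -> {after} ({aa2}): hydropathy shift {delta:+.1f}")
```

All four example transitions show a positive hydropathy shift; a fuller treatment would scan all 64 codons and weigh them by observed mutation spectra.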

 

  7. Does the front-loading hypothesis suggest that evolution was programmed?

 

No. Front-loading does not propose that all biological innovations were the result of planning and teleology.

 

Conclusion

 

The more I discuss front-loading with its opponents and proponents, the more I will add to this FAQ. Please add any questions, comments, etc., below.

 

About me

Over the years, I have become quite interested in the discussion over biological origins, and I think there is “something solid” behind the idea that teleology has played a role in the history of life on earth. When I’m not doing multiple sequence alignments, I’m thinking about ID and writing articles on the subject, which can be found on my website, The Genome’s Tale.

I am grateful to UD member kairosfocus for providing me with this opportunity to make a guest post on UD. Many thanks to kairosfocus.

Also see The Design Matrix, by Mike Gene.  >>

____________________

So, here we have one specific model for how ID could possibly have been done. Obviously, not the only possibility, but a significant one worthy of investigation. END

Comments
To all evos who are complaining/objecting about this (front-loading) hypothesis: please produce a testable hypothesis for your position, complete with predictions and falsifications. Until you do that, your "complaints" are nothing but a child's whine. IOW, show us how it is done with the reigning paradigm. Or continue to act like little babies; your choice.
Joe
February 4, 2012, 06:22 AM PDT
@Genomicus, If you want to pick out one point that is central to my objections, please look at the question of falsification. For your predictions (and if you start with just one, that's fine), what are the falsifications for your predictions? That's a good place to focus, I think, because it will naturally work toward excluding tautologies in your predictions -- a definition can't be falsified. For example: "GFLT predicts the nanodesigner would use cytosine as a base". That suggests a very simple scenario for falsification: cytosine is NOT found as a base for DNA in nature. So far, so good. Since we understand from our observations that cytosine IS a base, all we have left is to show that cytosine as a choice is a NECESSARY product of your hypothesis. "The nanodesigner had to choose cytosine due to...", etc. Get to that point, and you're golden. But falsification, I'd say, is the single best question to look at at this point.
eigenstate
February 4, 2012, 06:17 AM PDT
eig: "The "int x" declares storage space for a 16 bit number, and the "=4" tells the compiler to generate the bit pattern for 4 ("0000000000000100") and copy it to the location for the variable "x" we just allocated."

int x = 4
var a = 10
var TGG = Tryptophan
c = 299792458 m/s

You can swap your int x = 4 with any one of the variables and the story is the same. How would the protocol necessary for the origin of the genetic code get established by chance-necessity?
junkdnaforlife
February 4, 2012, 03:33 AM PDT
eigenstate: Thanks for your comments. You've posted quite a few rather lengthy comments, and I haven't got all the time in the world (I'll see if I can work on a response though); so, for the moment, is there any one comment of yours that you'd especially like me to address?
Genomicus
February 4, 2012, 01:02 AM PDT
Eigenstate, well put. I agree wholeheartedly.
champignon
February 3, 2012, 10:59 PM PDT
@champignon,
A slight quibble: It’s not necessarily a problem if a theory makes a prediction that is already known to be true (aka a ‘postdiction’). General relativity ‘postdicted’ the precession of Mercury’s orbit, yet that was one of its most spectacular confirmations.
Yes, I think I used the precession of Mercury's perihelion as an example with Genomicus here on this thread. The "arrow" matters -- you've used "postdiction", which may be a more effective term, I grant. With Mercury's orbit, the chain wasn't "observe precession => adjust GR accordingly". GR entailed that prediction, as you know. The complaint from me about "already known" arises when our observations DRIVE changes to the hypothesis. Changing the hypothesis is OK, too, but that resets the process; you have another hypothesis, and the original one has been superseded.
Imagine a scenario in which GFLT necessarily entailed the use of cytosine as a base but evolutionary theory somehow predicted a different base. In that case, the confirmed prediction would count in GFLT’s favor, even though we already know that cytosine is a DNA base.
Absolutely. And right or wrong, that would be a point of substance in ID's favor. There, we would have a contest, something we could (possibly) resolve empirically. Postdictively, even. In your scenario, though, the "arrow" goes: GFLT=>cytosine entailed => cytosine observed. Given what Genomicus is proposing, the arrow goes the wrong way: cytosine observed => GFLT.
That said, I agree with the gist of your comment. Genomicus needs to come up with distinct falsifiable predictions, and show that they are confirmed, before GFLT will be taken seriously.
Roger. I take it seriously now, or wouldn't bother to comment (and am sure in that sense, you do, too). GFLT needs to be "at risk" in its predictions, to carry scientific weight. That risk can be 'retroactively applied', like the precession of Mercury's perihelion under GR, but the risk has to be derived from the hypothesis' internal resources, not determined by existing facts and conditions. Nothing in GR is "patched" to account for the precession; it flows necessarily, unavoidably from the model. Even if we were previously aware of Mercury's orbit dynamics (and we were), GR necessarily entails that outcome, without having to take counsel of that particular observation.
eigenstate
February 3, 2012, 10:51 PM PDT
Hi eigenstate,
For example, if you say: “GFLT predicts the use of cytosine as a base”, we immediately are arrested by the prospects of falsification: we already KNOW cytosine is a DNA base. Your prediction cannot be false!
A slight quibble: It's not necessarily a problem if a theory makes a prediction that is already known to be true (aka a 'postdiction'). General relativity 'postdicted' the precession of Mercury's orbit, yet that was one of its most spectacular confirmations. Imagine a scenario in which GFLT necessarily entailed the use of cytosine as a base but evolutionary theory somehow predicted a different base. In that case, the confirmed prediction would count in GFLT's favor, even though we already know that cytosine is a DNA base. That said, I agree with the gist of your comment. Genomicus needs to come up with distinct falsifiable predictions, and show that they are confirmed, before GFLT will be taken seriously.
champignon
February 3, 2012, 10:36 PM PDT
Hey Genomicus, It's great if you respond to the other things I've said in this thread, but here's a question that kind of eclipses all that, and goes right to the heart of the problem here (as I see it): For your predictions, what are the conditions for falsification? That is, for each of your predictions, what should we NECESSARILY expect to find in our observations if your hypothesis is NOT correct? I think if you can provide the criterion for falsification, you will have gotten somewhere with this. I think it will also illustrate effectively the problems you have in your predictions right now. For example, if you say: "GFLT predicts the use of cytosine as a base", we immediately are arrested by the prospects of falsification: we already KNOW cytosine is a DNA base. Your prediction cannot be false! That is maybe an effective way to put some substance in your predictions. If your predictions can't be falsified, then you have to start over. Evolution, for example, predicts that you will find the most complex fossils and body plans in later strata, and more rudimentary forms in the earlier strata. That is what we have found, so far, but we are always discovering new fossils. If we were to start finding lots of rabbit fossils in the pre-Cambrian layers, this prediction of evolution would be falsified. With your "aliens chose cytosine", even though we know cytosine is a base, finding "alternative DNA" that used some other choice would not falsify your prediction, because that could just be another choice made by these nanotechnologists for different purposes (we know not what). Anyway, I suggest that's the shortest route out of the thicket here. What do you see as the conditions we must encounter to falsify each of the predictions you offer?
eigenstate
February 3, 2012, 09:36 PM PDT
@Genomicus,
Thus, it would make sense for us to propose a possible rational reason for why a nanotechnologist would choose cytosine as a base in the DNA molecule.
OK, perhaps that would be one of the working choices. Given what we know now, that seems to be a plausible choice, for sure. But "making sense as a choice" doesn't get you to a prediction, Genomicus. You can suppose such a thing to be the case, but you can't produce a scientific prediction from that. Try thinking about it this way: Why should we not suppose the nanotechnologist designers made a DIFFERENT choice than cytosine? If that's true, then the evidence we have in front of us would disconfirm your hypothesis! "Wait, but that's not what we see, so it must have been cytosine!", perhaps you'd like to retort. Well, that's the error, right there. You're working backwards, if so. Predictions are "forward". Looking at cytosine in biology and supposing that was the designer's choice is working "backward". What you are really saying here is THIS:

1. Cytosine is one of the four bases for DNA.
2. DNA was designed by nanotechnologists.
3. ERGO, nanotechnologists chose cytosine as one of the four bases for DNA.

Can you see how you've got your "arrow" pointing the wrong direction?
eigenstate
February 3, 2012, 09:23 PM PDT
@Genomicus,
The front-loading hypothesis is a direct extension of the directed panspermia hypothesis proposed by Crick and Orgel in 1973. Read their paper, and perhaps you’ll have a better understanding of what’s going on here.
OK, thanks, am familiar. But there's nothing in that work to support the predictions you are making here, or predictions of this *type*. You must provide a model, a working framework of some kind for the front loading in order to produce predictions that necessarily proceed from it, and are novel.
Directed panspermia proposes that the earth was intentionally seeded with life forms from a distant location. It does not propose that the designers were present on earth constantly trying new things out. No, this was a one-shot deal (both under the directed panspermia and front-loading hypotheses).
I see, but where is this indicated or required? I agree that if you hypothesize that said aliens only visited Earth once and made precisely one "drop", then by definition, we should necessarily expect the effects of a single "drop" -- no precursors, for example. But that's an arbitrary story, no? If we mutate GFLT's narrative (and it's a narrative, not a model!) and produce Eigenstate's Front Loading Theory (EFLT), where I take your narrative and just suppose our alien overlords were here for many thousands of years tinkering, and produced scads of precursors, now EFLT predicts precursors and prototypes, and GFLT doesn't. But NEITHER proceeds from a model! The precursors or no precursors are just restatements of the narrative. Work it the other way to see what I'm getting at, perhaps. Begin with GFLT, and suppose we are subsequently confronted with all sorts of precursors and prototypes in the evidential record. No problem: tweak GFLT into EFLT, and boom! Now you have got the predictions you needed to support precursors. This is the "oracle" problem, where the hypothesis is not constrained, and is not a concrete model, and so becomes an oracle that retroactively predicts anything at all we should find. Did we find lots of monophosphate-rich precursor components? Ahh, well, we can just adapt our "front loading hypothesis" to accommodate an alien research program that provides for that! QED! Right? This is where the semantics of 'hypothesis' become important. It's not just a casual conjecture, a narrative we might entertain. It provides some warrant for its model features, and then produces predictions that flow from it necessarily. We can change our hypothesis, but if the hypothesis is plastic enough to absorb any evidence we might encounter retrospectively, we've not got anything to work with epistemically. You are just dealing in just-so stories and tautologies.
"The aliens set up front loading so that we would find [insert your favorite observation from reality here]", doesn't ground a scientific prediction. By definition, it must have been "so that we would find...."
Further, the engineers were rational agents, thus the initial life forms on earth contained a genetic code virtually identical to the universal optimal code on earth today. No room for precursor genetic codes.
That doesn't follow AT ALL. Why should we think that a "rational agent" would provide a genetic code that is "optimal" (never mind the problem of what 'optimal' means -- I've asked you elsewhere)? I don't see any problem with identifying that as a possibility. But it's not an entailment. You don't have what you must have to produce a scientific prediction here, Genomicus. It can't be 'this is one possibility'. It may be a possibility, a plausible scenario. But if it might have been any number of other ways, under the same set of assumptions you are making, then the witness of the evidence can't help you with your hypothesis. It can't be confirmed or disconfirmed by the evidence, because the hypothesis doesn't dictate what must necessarily be found, if the hypothesis is true.
eigenstate
February 3, 2012, 09:14 PM PDT
@junkdnaforlife,
eig: What is your current hypothesis for the origin of the genetic code?
I don't have a hypothesis to propose beyond the ideas current with researchers in abiogenesis -- RNA world as a precursor context, etc.
Why are the elements represented by the values, a,t,g,c selected from a hypothetical matrix and assigned values, i.e. functional strings in the genetic code, but the vast majority of combinations are functionless strings?
Selection, of course. Functional configurations persist and reproduce better than non-functional configurations. So where you have some "bootstrapping configuration" that replicates, you have function at LEAST insofar as replication is enabled. If you just think about configurations that maintain and support replication, you can see that you will necessarily, over time, have organisms around that feature replication functionality. That's a truism. The organisms that DON'T have replicative functionality die out, so they are not around to consider!
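The selection truism described above can be made concrete with a toy replicator simulation (an illustrative sketch of mine, not part of the original exchange; the population size, fecundity, and 1% copying-failure rate are arbitrary choices):

```python
import random

random.seed(1)

# Start with a pool dominated by inert (non-replicating) configurations.
population = ["functional"] * 5 + ["inert"] * 95

for generation in range(10):
    next_gen = []
    for individual in population:
        if individual == "functional":
            # A replicator leaves two offspring; copying occasionally
            # breaks the replication function (1% chance per copy).
            for _ in range(2):
                next_gen.append("functional" if random.random() > 0.01 else "inert")
        # Inert configurations leave no offspring and drop out of the lineage.
    population = next_gen[:1000]  # crude resource limit

frac = population.count("functional") / len(population)
print(f"functional fraction after 10 generations: {frac:.2f}")
```

After a few generations essentially every configuration in the pool descends from a replicator; the inert ones never leave descendants, which is the whole point of the truism.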
What is the difference between you assigning a specific value to a variable in an algorithm, and the specific assignment of functional sequence strings in the genetic code?
In C, for example, I might write a statement like this: int x = 4; The "int x" declares storage space for a 16-bit number, and the "=4" tells the compiler to generate the bit pattern for 4 ("0000000000000100") and copy it to the location for the variable "x" we just allocated. I can get you that far, but don't know what you mean by "specific assignment of functional sequence strings in the genetic code". Give me such an example, and I'll see what that looks like against my "x=4" in a computing environment. What is a "specific" assignment as opposed to a "non-specific" assignment?
eigenstate
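The 16-bit storage picture in the C example above can be reproduced with a short check (my sketch; Python rather than C purely for brevity):

```python
# Render small integers as 16-bit two's-complement patterns, matching the
# "int x = 4" storage picture in the comment above.
def bits16(n: int) -> str:
    return format(n & 0xFFFF, "016b")

print(bits16(4))  # -> 0000000000000100
```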
February 3, 2012, 08:49 PM PDT
@Genomicus#8.1,
By “quite complex” I mean more complex than would be assumed under a non-teleological framework.
I don't think that helps. How complex would LUCA be assumed to be under a non-teleological framework? The only non-teleological framework I'm familiar with that IS a framework (model) would be evolutionary theory, and I don't know what the complexity of LUCA would be in that case, or even what you mean by complexity for any LUCA. Are you talking about complexity in terms of K-C theory against its gene sequences? You don't need to use LUCA for your answer, pick a bacterium or some such -- something -- and show me how you would calculate complexity, and that would be a great start. More problematic, though, is that you say this in your prediction:
Biological complexity. Front-loading predicts that the last universal common ancestor (LUCA) was quite complex, complete with genes necessary for the origin and development of metazoan life forms.
Evolutionary theory produces the same tautology (it's not a prediction, yet, so far as I can see). Per evolution, LUCA was, BY DEFINITION, complete with all the genes necessary for the origin and development of metazoan life forms. Similarly, Genomicus-Front-Loading-Theory (GFLT) says that LUCA, BY DEFINITION, was complete with the genes necessary for the origin and development of metazoan life forms. On both hypotheses, here we are, after all! I know I'm having trouble getting through to you on this, but maybe this response makes headway regarding tautologies? You've not differentiated your GFLT here from evolutionary theory -- both suppose that LUCA had all the right stuff for the later development of the diverse forms that came later. Evolution uses a different mechanism than GFLT (I think, though I'm still not sure, given what you've said), relying on undirected and stochastic inputs for variations that get filtered and accumulated (or not) based on natural selection, but in both cases, LUCA was complete with everything it needed for the future. This must be true, right? Else, LUCA wouldn't be LUCA! The term LUCA implies the very thing you suppose you are predicting, in the definition of the term. Back to complexity: if you can quantify the complexity entailed by GFLT for LUCA, maybe we're getting somewhere. It's perhaps kosher in principle, then, as a prediction -- if you say LUCA had complexity greater than X (where complexity is defined in concrete terms) -- but even if that is epistemically valid as a prediction, it seems practically impossible to test your prediction. We have no LUCA to test, and can't expect to. If we are able to "extrapolate back", there's a problem, as that extrapolation would rely on rules for extrapolation that nullify your prediction; you have to use the theory to do the extrapolation! Nevertheless, if you can give me something on calculating the complexity of LUCA, or any organism or genome, that's somewhere to start.
On the entailment thing, this is a problem. Maybe I can go find some links or other books or resources to point you to that will succeed where I've failed here, but clearly, the nature of "entailment" generally, and the use of entailment for scientific predictions that proceed from your models specifically, has escaped you. If we can get over that hurdle, then we're golden, at least for the next step in looking at this.
eigenstate
February 3, 2012, 08:17 PM PDT
Oh what the heck, I'll give you one more clue: Non-Local Quantum Entanglement In Photosynthesis http://vimeo.com/30235178
bornagain77
February 3, 2012, 07:37 PM PDT
Why is “quite complex” entailed by your hypothesis? What qualifies as “quite complex” and how do you measure complexity?
By "quite complex" I mean more complex than would be assumed under a non-teleological framework. Why is this entailed in the hypothesis? Precisely because front-loading requires that the initial life forms contained advanced machinery etc. to terraform the earth and front-load future states.
Genomicus
February 3, 2012, 07:35 PM PDT
Another little clue for you, champ: The ATP Synthase Enzyme – exquisite motor necessary for first life – video http://www.youtube.com/watch?v=W3KxU63gcF4
bornagain77
February 3, 2012, 07:30 PM PDT
And Champ, so in your simplistic view of reality, as long as you pour energy into an open system then everything is hunky dory in your book as far as the generation of functional prescriptive information??? Perhaps you should think just a little more carefully about what you are actually claiming! Evolution Vs. Thermodynamics – Open System Refutation – Thomas Kindell – video http://www.metacafe.com/watch/4143014 Please give me the exact proportional equivalence to the amount of functional information I can expect in an open system from the amount of energy I pour into it! :) I will give you a clue: the more raw energy you pour into a system, the more you will destroy the functional information in that open system!
bornagain77
February 3, 2012, 07:11 PM PDT
If this designer/front-loader were producing prototypes, tests (as I would if I were going to do some “virtual engineering” in a software context simulating biology), that would leave prototypes and precursors behind, no?
The front-loading hypothesis is a direct extension of the directed panspermia hypothesis proposed by Crick and Orgel in 1973. Read their paper, and perhaps you'll have a better understanding of what's going on here. Directed panspermia proposes that the earth was intentionally seeded with life forms from a distant location. It does not propose that the designers were present on earth constantly trying new things out. No, this was a one-shot deal (both under the directed panspermia and front-loading hypotheses). Further, the engineers were rational agents, thus the initial life forms on earth contained a genetic code virtually identical to the universal optimal code on earth today. No room for precursor genetic codes.
Genomicus
February 3, 2012, 07:06 PM PDT
The fail point here in this item is “so why would a front-loader choose cytosine as a base in DNA?”. It’s not sufficient to offer us *a* reason why you think cytosine would be chosen (and this is particularly devastating if you are offering this putative prediction in the context of an “intelligent design” explanation, an explanation with an unknown, inscrutable, mysterious designer). The choice must follow NECESSARILY from the hypothesis.
Well, actually, unlike many other ID concepts, the front-loading hypothesis does posit that the designer(s) of the first genomes on earth were advanced nanotechnologists who were rational agents. Thus, it would make sense for us to propose a possible rational reason for why a nanotechnologist would choose cytosine as a base in the DNA molecule. Given that C --> T transitions lead to an increase in hydrophobicity, this is a clue that a designer chose this base precisely because it is prone to deamination. So, we offer this tentative prediction from a front-loading perspective. If it was indeed discovered that cytosine deamination has played a key role in the course of evolution, then this would be a nice chunk of evidence for the front-loading hypothesis.
Genomicus
February 3, 2012, 06:57 PM PDT
Just wanted to thank you for these posts, Genomicus. I haven't read them all thoroughly yet, but they are bookmarked. Well done :)
Elizabeth Liddle
February 3, 2012, 12:18 PM PDT
Hi Collin,
I would like to hear your explanation. What I was told in college was that evolution is not a violation of the 2nd law because earth is constantly bathed in high ordered energy. So earth has the resources to slowly evolve life utilizing that high ordered energy.
That's basically right. Very loosely speaking, the second law says that in an isolated system, disorder (aka entropy) must either stay the same or increase. However, the earth is not an isolated system. It receives order in the form of sunlight, and this order gets used to grow plants, create fossil fuels, drive the weather, and so on. It also drives the process of evolution. The second law allows the earth's disorder to decrease, but only if the disorder of earth's surroundings increases by an equal or greater amount. The sun is part of the earth's surroundings. In the process of creating sunlight by hydrogen fusion, the sun increases its disorder (entropy) by a huge amount that is more than enough to offset the decrease in disorder (entropy) on earth due to received sunlight. Thus the second law is not violated by the growing of plants, the formation of storms, or evolution. (Conversely, if evolution did violate the second law, then so would the growth of plants. The second law would be violated 24 hours a day, in which case it wouldn't be a law at all! So if you hear someone like BA77 claiming that evolution violates the second law, ask him if he believes that plants do also.)
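The compensation budget described above can be put in rough numbers. A minimal sketch (the temperatures are standard effective values; treating each joule of sunlight as heat delivered at the sun's photosphere temperature and re-radiated at earth's effective temperature is a textbook simplification, not a climate model):

```python
# Rough entropy bookkeeping for sunlight processed by the earth.
T_SUN = 5778.0   # K, effective temperature of the sun's photosphere
T_EARTH = 255.0  # K, earth's effective radiating temperature
Q = 1.0          # J of solar energy absorbed and later re-radiated

s_in = Q / T_SUN      # entropy arriving with the sunlight (small)
s_out = Q / T_EARTH   # entropy leaving as infrared radiation (large)

budget = s_out - s_in  # net entropy exported to the surroundings per joule
print(f"entropy exported per joule: {budget:.5f} J/K")
# -> entropy exported per joule: 0.00375 J/K
```

Because `budget` is positive, every joule of sunlight processed buys a surplus of entropy production in earth's surroundings, which is exactly the headroom that permits local decreases in entropy without violating the second law.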
What causes me to doubt is that many celestial objects are bathed in high ordered energy but don’t seem to exhibit anything near as complex and interconnected as life. Indeed, ultraviolet radiation seems to destroy life rather than foster it despite ultraviolet radiation being high ordered energy. Why don’t we see life or something just as complex on mercury, venus mars or elsewhere (so far)?
The key point is that the second law doesn't guarantee that interesting things will happen if sunlight enters a system. It allows interesting things to happen, but it doesn't require them. If you put a slab of granite out in the sun, it will warm up, but not much else will happen. With regard to evolution, the one and only thing the second law tells you is that you will not see evolution on earth unless there is a compensating increase in the disorder (entropy) of earth's surroundings. That's it. It doesn't guarantee that you will see evolution if those conditions are met, but it doesn't rule out evolution under those circumstances either. This is where Granville Sewell gets confused. He writes things like this:
It is commonly argued that the spectacular increase in order which has occurred on Earth does not violate the second law of thermodynamics because the Earth is an open system...
That part is true. He continues:
...and anything can happen in an open system as long as the entropy increases outside the system compensate the entropy decreases inside the system.
That part is ridiculous. Scientists don't claim that anything at all can happen in an open system as long as those conditions are met. They know that's not true, and they would never make such an idiotic claim. They only claim that the second law itself doesn't forbid things from happening, as long as those conditions are met. There may very well be other reasons why something can't happen, but as long as those conditions are met, the second law itself doesn't forbid it. The second law only forbids violations of the second law, and nothing else. Granville continues:
Thus, unless we are willing to argue that the influx of solar energy into the Earth makes the appearance of spaceships, computers and the Internet not extremely improbable, we have to conclude that the second law has in fact been violated here.
Granville has forgotten the key fact about the second law: The second law forbids violations of the second law, and nothing else. The appearance of spaceships, computers and the Internet on earth does not violate the second law as long as the entropy of earth's surroundings increases by a sufficient amount. Granville clearly doubts that evolution spontaneously happened on earth. He believes that spaceships, computers and the Internet would not have appeared if God hadn't intervened to bring humans into existence. But these doubts have nothing to do with the second law, because the second law by itself doesn't forbid these things from happening. Granville is trying to wrap his personal incredulity in the mantle of the second law, but it doesn't work, because the second law only forbids violations of the second law, and nothing else.
champignon
February 3, 2012 at 01:08 AM PDT
eig: What is your current hypothesis for the origin of the genetic code? Why are the elements represented by the values a, t, g, c selected from a hypothetical matrix and assigned values, i.e. functional strings in the genetic code, while the vast majority of combinations are functionless strings? What is the difference between you assigning a specific value to a variable in an algorithm, and the specific assignment of functional sequence strings in the genetic code?
junkdnaforlife
February 2, 2012 at 07:59 PM PDT
ba77 from 2.1.1.1.25:
And thus, once again, since empirical evidence has final say in the scientific method, then it is on the one who contests my claim to demonstrate, empirically, that it is otherwise!
Yep, empiricism is everything. And the 2nd is just math. Empirically, thus far, it has held; as long as you ignore issues of decay products in particle physics (outside the error bars) and the postulates about virtual photons (conjuration for the math). But anywhere else you're not going to lose money betting on the side of the 2nd unless someone rigs a bar-bet against you. But the 2nd does not disbar the standard narrative of Darwinism; by its own math. There are other reasons to object to things, but the 2nd isn't one of them. Specifically, any prophecy about the past that acknowledges that the sun spews things this-a-way cannot be in violation. At least not prima facie.
i.e. more to the point, why is the assumption that randomness can possibly generate functional information, though no one has ever seen this (Abel) given precedence over the fact that randomness, as far as the evidence can tell us, consistently destroys functional information???
Religious mythology. That's the short answer. The statement that there cannot be islands of apparent violation is religious mythology also. Both of which have nothing to do with inspecting and characterizing a chaotic system. Hint: You can't do it with exponential functions and fractional children unless there's a lot of sawmill accidents involved.
Maus
February 2, 2012 at 07:04 PM PDT
@Genomicus
3. Biological complexity. Front-loading predicts that the last universal common ancestor (LUCA) was quite complex, complete with genes necessary for the origin and development of metazoan life forms.
Why is "quite complex" entailed by your hypothesis? What qualifies as "quite complex", and how do you measure complexity? I read back a bit, but didn't see anywhere you laid out your terms. If you have a link to where you have provided the semantics and measurements for the complexity you're using, I can just read that, thanks.

As you've stated it, I don't think this prediction will provide any possible lift for your hypothesis, no matter what the evidence is. This is yet another entailment fail (with a kicker of undefined measures).

It's good to point to a solid example by way of comparison. Einstein's GR proposal necessarily predicted the precession of the perihelion of Mercury. It could not have been otherwise, per Einstein's proposed model. That is the key linkage you are missing in all of your prediction proposals, so far, from what I've seen. GR also entails the observation of redshift in electromagnetic radiation traversing areas of gravitational distortion (e.g. stars, planets). It's not a "could be", but a "must be" given GR's model.
eigenstate
February 2, 2012 at 04:49 PM PDT
2. The genetic code. The front-loading hypothesis proposes that the universal optimal genetic code was present at the dawn of life: in other words, we won’t find precursors of sub-optimal genetic codes, because the genetic code was optimal from the start. Further, the front-loading hypothesis predicts that all 20 amino acids would have been used in the first life forms, and that the transcription, translation, and proof-reading machinery would have all been present at the start of life on earth.
If this designer/front-loader were producing prototypes and tests (as I would if I were going to do some "virtual engineering" in a software context simulating biology), that would leave prototypes and precursors behind, no? If not, then your hypothesis asserts that the designer/front-loader made all the front-loading happen in a "single pass". That way, you would have an entailed prediction -- no prototypes or precursors should be found by the "single pass" designer/front-loader. But that just pushes back your problem: what is it in your model that indicates "single pass" vs. "iterative tinkering"? Even so, this is better in the sense that you have entailment (if you are explicit about "single pass" front-loading, or "always perfect front-loading" -- never mind why that would be, for now).

More pressing, though: what do you mean by "optimal" here? How would we test genetic codes to see if they are optimal? I can't get a handle on what it MEANS in this context, let alone how you would establish or measure that empirically. How do you do that?
eigenstate
February 2, 2012 at 04:38 PM PDT
@Genomicus
Cytosine deamination. Of the three bases in DNA (adenine, guanine, and cytosine) that are prone to deamination, cytosine is the most likely to undergo deamination. This ultimately results in a C –> T transition. Cytosine deamination often causes severe genetic diseases in humans, so why would a front-loader choose cytosine as a base in DNA? It has been observed that C –> T transitions result in a pool of strongly hydrophobic amino acids, which leads to the following prediction from a front-loading perspective: a designer would have chosen cytosine because it would facilitate front-loading in that mutations could be channeled in the direction of increased hydrophobicity. This prediction would be confirmed if key protein sequences in metazoan life forms were the result of numerous C –> T transitions.

It's peculiar that you appear to understand concepts like "deamination", using them in context, sensibly, etc., and yet the concept of predictions produced by scientific hypotheses and models seems totally foreign to you. The fail point here in this item is "so why would a front-loader choose cytosine as a base in DNA?". It's not sufficient to offer us *a* reason why you think cytosine would be chosen (and this is particularly devastating if you are offering this putative prediction in the context of an "intelligent design" explanation, an explanation with an unknown, inscrutable, mysterious designer). The choice must follow NECESSARILY from the hypothesis. You are quite conspicuously working backwards from your conclusion. Coming up with a plausible choice -- and given an unspecified, unknown, potentially omniscient and omnipotent designer, ALL choices are plausible -- does not ground a prediction. First you lay out the hypothesis, the proposed mechanism, and then you deduce from that NECESSARY implications that proceed from that. If you can affirm what is entailed from your model, you got something!

Sometimes those predictions are trivial or banal, and so don't carry much weight. Other times they just don't distinguish the hypothesis from other, competing hypotheses. But in this case, if you COULD establish that such a choice was ENTAILED from your proposed model, that would be quite substantial indeed, I think. As it is, though, it's a miss.
eigenstate
February 2, 2012 at 03:43 PM PDT
vjtorley: Thanks for your comment, and I hope I can write up some more articles in the future. Also a huge thanks again to kairo for giving me this opportunity.
Genomicus
February 2, 2012 at 02:34 PM PDT
Starbucks:
I think there is one key difference between your perspective on front-loading, and Mike Gene's. That is, in my opinion, that Mike Gene doesn't really concern himself with the question of "How could sophisticated molecular systems be front-loaded?" For him it's not a matter of what can or can't happen, but what did happen. As such I don't think he allows ID type complexity arguments into his thinking on FLE, unless I'm mistaken.
I have a feeling you're misunderstanding my position. When I pose the question "how were sophisticated molecular systems front-loaded," I'm using that as a question we can ask once we have some pretty good evidence that a molecular system was indeed front-loaded, instead of purely the product of the blind watchmaker.
Genomicus
February 2, 2012 at 02:30 PM PDT
That sounds like an impressive spec, but what with horizontal transfer, it becomes problematic defining what you mean by LUCA.
Please elaborate on why horizontal gene transfer makes defining LUCA a problem. Thanks.
The first metazoans would have most of the genes found in modern ones, since they were invented by microbes.
Yes, but this prediction involves going back to the very first life form, the ancestor of all living organisms. At the moment, it's quite difficult to determine just what its genome was like, so this is a prediction for future years. The FL model predicts that this common ancestor to all life forms was quite complex, complete with the universal optimal genetic code, etc.
So OOL research is relevant or not, assuming it eventually supports a bootstrap scenario?
OOL research is relevant to the discussion, of course. Not sure where you're going with this line of thought.
Genomicus
February 2, 2012 at 02:26 PM PDT
PDT
Genomicus, thanks very much for a very stimulating paper. I look forward to reading more papers from you in the future. Thanks again.
vjtorley
February 2, 2012 at 01:28 PM PDT
PDT
KF, Genomicus, thank you for this fascinating post. It gets my imagination going. I think that this hypothesis is ripe for experimentation. Could it be shown that there were chicken genes in a dinosaur ready to be expressed? Or can the platypus be shown to be a library for future species?
Collin
February 2, 2012 at 12:06 PM PDT